Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1701–1711, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1156

Active Sentiment Domain Adaptation

Fangzhao Wu†, Yongfeng Huang†∗, and Jun Yan‡
†Department of Electronic Engineering, Tsinghua University
‡Microsoft Research Asia, Beijing, China
[email protected], [email protected], [email protected]
∗ Corresponding author.

Abstract

Domain adaptation is an important technology for handling the domain dependence problem in the sentiment analysis field. Existing methods usually rely on sentiment classifiers trained in source domains. However, their performance may decline heavily if the distributions of sentiment features in the source and target domains differ significantly. In this paper, we propose an active sentiment domain adaptation approach to handle this problem. Instead of source domain sentiment classifiers, our approach adapts general-purpose sentiment lexicons to the target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode, as well as the domain-specific sentiment similarities among words mined from unlabeled samples of the target domain. A unified model is proposed to fuse different types of sentiment information and train a sentiment classifier for the target domain. Extensive experiments on benchmark datasets show that our approach can train an accurate sentiment classifier with fewer labeled samples.

1 Introduction

Sentiment classification is widely known as a domain-dependent problem (Liu, 2012; Pang and Lee, 2008; Blitzer et al., 2007; Pan et al., 2010). This is because different domains usually have many different sentiment expressions. For example, “lengthy” and “boring” are popularly used in the Book domain to express negative sentiment. However, they are rare in the Kitchen appliance domain. Moreover, the same word or phrase may convey different sentiments in different domains. For instance, “unpredictable” is frequently used to express positive sentiment in the Movie domain (e.g., “The plot of this movie is fun and unpredictable”). However, it tends to be used as a negative word in the Kitchen appliance domain (e.g., “Even holding heat is unpredictable. It is just terrible!”). Thus, every domain has many domain-specific sentiment expressions, which cannot be captured by other domains. The performance of directly applying a general sentiment classifier or a sentiment classifier trained in other domains to a target domain is usually suboptimal. Since there are a large number of domains in user-generated content, it is impractical to manually annotate enough samples for each domain to train an accurate domain-specific sentiment classifier. Thus, sentiment domain adaptation, which transfers the sentiment classifier trained in a source domain with sufficient labeled data to a target domain with no or scarce labeled data, has been widely studied (Blitzer et al., 2007; Pan et al., 2010; He et al., 2011; Glorot et al., 2011). Existing sentiment domain adaptation methods are mainly based on transfer learning techniques.
Many of them try to learn a new feature representation to augment or replace the original feature space in order to reduce the gap between the sentiment feature distributions of the source and target domains (Pan et al., 2010; Glorot et al., 2011). For example, Blitzer et al. (2007) proposed to learn a latent representation for domain-specific words from both source and target domains by using pivot features as a bridge. The advantage of these methods is that no labeled data in the target domain is needed. However, when the distributions of sentiment features in the source and target domains differ significantly, the performance of domain adaptation will heavily decline (Li et al., 2013). In some cases, the performance of adaptation is even lower than that without adaptation, which is usually known as negative transfer (Pan and Yang, 2010).

In this paper, we propose an active sentiment domain adaptation approach to handle this problem by incorporating both general sentiment information and a small number of actively selected labeled samples from the target domain. More specifically, in our approach the general sentiment information extracted from sentiment lexicons is adapted to the target domain using domain-specific sentiment similarities among words. The general sentiment information is regarded as a “background” domain to transfer. The word similarities are extracted from unlabeled samples of the target domain using both syntactic rules and co-occurrence patterns. Then we actively select and annotate a small number of informative samples from the target domain in an active learning manner. These labeled samples are incorporated into our approach to improve the performance of sentiment domain adaptation. A unified model is proposed to incorporate different types of sentiment information to train a sentiment classifier for the target domain. Extensive experiments were conducted on benchmark datasets. The experimental results show that our approach can train accurate sentiment classifiers and reduce the manual annotation effort.

2 Related Work

2.1 Sentiment Domain Adaptation

Sentiment classification is well known as a highly domain-dependent task, and domain adaptation is widely studied in the sentiment analysis field to handle this problem (Blitzer et al., 2007; Pan et al., 2010; He et al., 2011; Glorot et al., 2011). Existing sentiment domain adaptation methods are mainly based on transfer learning techniques (Pan and Yang, 2010), where sentiment classifiers are trained in one or multiple source domains with sufficient labeled samples, and then applied to a target domain where there are no or only scarce labeled samples. In order to reduce the gap between the sentiment feature distributions of the source and target domains, many sentiment domain adaptation methods try to learn a new feature representation to augment or replace the original feature space. For example, Pan et al. (2010) proposed a sentiment domain adaptation method based on the spectral feature alignment (SFA) algorithm. They first manually selected several domain-independent features and computed the associations between domain-specific features and domain-independent features. After that they built a bipartite graph where domain-independent and domain-specific features were regarded as two types of nodes. Then domain-specific features were grouped into several clusters using a spectral clustering algorithm. These clusters were used to augment the original feature representations. Glorot et al.
(2011) proposed a sentiment domain adaptation method based on a deep learning technique, i.e., Stacked Denoising Autoencoders. They learned the parameters of neural networks using unlabeled samples from both source and target domains, and used the hidden nodes of the neural networks as the latent feature representations of both domains. Then they trained sentiment classifiers using source domain labeled data in this new feature space and applied them to the target domain. The advantage of these sentiment domain adaptation methods is that they do not rely on labeled data in the target domain. However, they have a common shortcoming: when the distributions of sentiment features in the source and target domains differ significantly, the performance of domain adaptation will heavily decline (Li et al., 2013). In some cases, negative transfer may happen (Blitzer et al., 2007; Li et al., 2013), which means the performance of adaptation is worse than that without adaptation (Pan and Yang, 2010).

Different from many existing sentiment domain adaptation methods, in our approach we adapt the general sentiment information in sentiment lexicons to the target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode. Since the sentiment words in general-purpose sentiment lexicons usually convey consistent sentiment polarities in different domains, and the actively selected labeled samples contain rich domain-specific sentiment information of the target domain, our approach can effectively reduce the risk of negative transfer. The usefulness of labeled samples from the target domain in sentiment domain adaptation has been observed by previous research (Choi and Cardie, 2009; Chen et al., 2011; Li et al., 2013; Wu et al., 2016). For example, Choi and Cardie (2009) proposed to adapt a sentiment lexicon to a specific domain by exploiting both the relations among words which co-occur in the same sentiment expressions and the relations between words and labeled sentiment expressions. However, the labeled samples used in these methods are randomly selected, while in our approach we actively select informative samples from the target domain to annotate. Thus, our approach has the potential to reduce the manual annotation effort.

2.2 Active Learning

Active learning is a useful technique in scenarios where unlabeled data is abundant but its labels are difficult or expensive to obtain (Tong and Koller, 2002; Settles, 2010). By actively selecting informative samples to label, active learning can effectively reduce the annotation effort and improve the classification performance with a limited budget (Li et al., 2012). An important problem in active learning is how to evaluate the informativeness of unlabeled samples (Fu et al., 2013). Different methods have been applied to select informative samples, such as uncertainty sampling (Zhu et al., 2010; Yang et al., 2015), query-by-committee (Freund et al., 1997; Li et al., 2013), and so on. In our approach, uncertainty combined with density is used to measure the informativeness of samples. A major difference between our approach and existing active learning methods is that in existing methods the parameters of the initial classifier are either initialized as zero (Cesa-Bianchi et al., 2006) or learned from a set of randomly selected samples (Settles, 2010).
In contrast, the initial sentiment classifier in our approach is constructed by adapting the general sentiment information to the target domain via the domain-specific sentiment similarities among words.

There are a few works that apply active learning methods to the sentiment domain adaptation task (Rai et al., 2010; Li et al., 2013). For example, Rai et al. (2010) proposed an online active learning algorithm for sentiment domain adaptation. They started with a sentiment classifier trained on the labeled samples of a source domain. Then they sequentially selected informative samples in the target domain to annotate with a probability positively related to classification uncertainty. The newly annotated samples were used to update the sentiment classifier in an online learning manner. Li et al. (2013) proposed another active learning method for cross-domain sentiment classification. In their method they trained two sentiment classifiers, one on the labeled samples of the source domain, and the other on the labeled samples of the target domain. Then a query-by-committee strategy was used to select the informative instances from the target domain. Different from these methods, our approach does not rely on the labeled data of source domains. Instead, in our approach the general sentiment information in sentiment lexicons is actively adapted to the target domain; this information usually has better generalization ability across domains than a sentiment classifier trained in a single source domain. In addition, our approach can incorporate the domain-specific sentiment similarities among words mined from unlabeled samples of the target domain, which are not considered in these methods.

3 Active Sentiment Domain Adaptation

3.1 Notations

First we introduce several notations that will be used in the remaining part of this paper. Denote the general sentiment information extracted from a general-purpose sentiment lexicon as $p \in \mathbb{R}^{D \times 1}$, where $D$ is the vocabulary size. If the $i$th word is labeled as positive (or negative) in the sentiment lexicon, then $p_i = +1$ (or $p_i = -1$). Otherwise, $p_i = 0$. Following many previous works in the sentiment classification field (Blitzer et al., 2007; Pan et al., 2010), here we select a linear classifier as the sentiment classifier, and denote the linear classification model as $w \in \mathbb{R}^{D \times 1}$. We use $f(x_i, y_i, w)$ to represent the loss of classifying the $i$th labeled sample in the target domain under the classification model $w$, where $f$ is the classification loss function, $x_i \in \mathbb{R}^{D \times 1}$ is the feature vector of this sample, and $y_i$ is its sentiment label. In this paper we focus on binary sentiment classification, i.e., $y_i \in \{+1, -1\}$. In addition, we select log loss for $f$. Thus, $f(x_i, y_i, w) = \log(1 + \exp(-y_i w^T x_i))$. Besides, we use $S \in \mathbb{R}^{D \times D}$ to represent the sentiment similarities among words extracted from unlabeled samples of the target domain.

3.2 Domain-Specific Sentiment Similarities

Next we introduce the extraction of domain-specific sentiment similarities among words from unlabeled samples of the target domain. Two types of similarities are extracted in this paper. The first one is based on syntactic rules, which is inspired by (Hatzivassiloglou and McKeown, 1997; Huang et al., 2014; Wu and Huang, 2016). If two words have the same POS-tag, such as adjective, verb, or adverb, and they are connected by the coordinating conjunction “and” in the same sentence, then we regard them as conveying the same sentiment polarity.
In addition, if two words are connected by the adversative conjunction “but” and have the same POS-tag, then they are assumed to have opposite sentiment polarities. Denote $S^r \in \mathbb{R}^{D \times D}$ as the sentiment similarities extracted from unlabeled samples according to the syntactic rules; the similarity score between words $i$ and $j$ is defined as:

$S^r_{i,j} = \frac{N^s_{i,j} - N^o_{i,j}}{N^s_{i,j} + N^o_{i,j} + \alpha_1}$,    (1)

where $N^s_{i,j}$ and $N^o_{i,j}$ are the frequencies of words $i$ and $j$ having the same or opposite sentiments respectively according to the syntactic rules, and $\alpha_1$ is a positive smoothing factor. If two words have a much higher frequency of sharing the same sentiment than opposite sentiments, then they will have a larger positive sentiment similarity score. Note that $S^r_{i,j}$ can be negative according to Eq. (1). Here we focus on sentiment similarity rather than dissimilarity, and set all the negative values in $S^r$ to zero. The range of $S^r_{i,j}$ is $[0, 1]$.

The second type of sentiment similarities is extracted according to the co-occurrence patterns among words. It is inspired by the observation that words frequently co-occurring with each other not only have a high probability of having similar semantics, but also tend to share similar sentiments (Turney, 2002; Velikovich et al., 2010; Yogatama and Smith, 2014; Tang et al., 2015; Hamilton et al., 2016). In this paper, we compute the co-occurrence between words in the context of a document. Denote $\mathcal{D}$ as the set of all documents, and $N^i_d$ as the frequency of word $i$ appearing in document $d$. Then, the sentiment similarity score between words $i$ and $j$ based on their co-occurrence patterns is defined as:

$S^c_{i,j} = \frac{\sum_{d \in \mathcal{D}} \min\{N^i_d, N^j_d\}}{\sum_{d \in \mathcal{D}} \max\{N^i_d, N^j_d\} + \alpha_2}$,    (2)

where $\alpha_2$ is a positive smoothing parameter. If two words frequently co-occur with each other in many documents, then they will have a high sentiment similarity score according to Eq. (2). The range of $S^c_{i,j}$ is also $[0, 1]$. Denote $S^c \in \mathbb{R}^{D \times D}$ as the set of all sentiment similarities extracted according to co-occurrence patterns.

The sentiment similarities extracted according to syntactic rules are usually of high accuracy. However, their coverage is limited, because the word pairs detected by these syntactic rules are sparse. In contrast, the coverage of sentiment similarities extracted from co-occurrence patterns is quite wide because a document is a long context, while their accuracies are not as high as the similarities based on syntactic rules. Thus, we propose to combine these two types of sentiment similarities to obtain a balance between accuracy and coverage. Denote $S \in \mathbb{R}^{D \times D}$ as the final sentiment similarities among words, and $S_{i,j} = \theta S^r_{i,j} + (1-\theta)S^c_{i,j}$, where $\theta \in [0, 1]$ is the combination coefficient. In this paper we set $\theta$ to 0.5, which means that we regard these two types of sentiment similarities as equally important.
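To make this construction concrete, the following is a minimal sketch (not the authors' code) of how the two similarity matrices and their combination could be computed, assuming the pair counts $N^s$ and $N^o$ from the syntactic rules and a document-term count matrix are already available. Dense $D \times D$ arrays are used for clarity (a sparse representation would be needed at realistic vocabulary sizes), and the smoothing values $\alpha_1 = \alpha_2 = 1$ are placeholders rather than values from the paper.

```python
import numpy as np

def rule_similarity(N_same, N_opp, alpha1=1.0):
    """S^r of Eq. (1): agreement vs. disagreement counts of word pairs under
    the "and"/"but" syntactic rules; negative values are clipped to zero."""
    S_r = (N_same - N_opp) / (N_same + N_opp + alpha1)
    return np.clip(S_r, 0.0, None)

def cooccurrence_similarity(doc_term_counts, alpha2=1.0):
    """S^c of Eq. (2): per-document min/max of term counts, summed over all
    documents. doc_term_counts has shape (num_documents, D)."""
    D = doc_term_counts.shape[1]
    S_c = np.zeros((D, D))
    for i in range(D):
        col_i = doc_term_counts[:, i:i + 1]                # (num_documents, 1)
        num = np.minimum(col_i, doc_term_counts).sum(axis=0)
        den = np.maximum(col_i, doc_term_counts).sum(axis=0) + alpha2
        S_c[i] = num / den
    return S_c

def combined_similarity(S_r, S_c, theta=0.5):
    """Final similarities S = theta * S^r + (1 - theta) * S^c."""
    return theta * S_r + (1.0 - theta) * S_c
```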
3.3 Initial Sentiment Classifier Construction

In this section, we introduce the construction of the initial sentiment classifier that starts the active learning process. Existing active learning methods usually randomly select a set of unlabeled samples to annotate and then train the initial classifier on them (Settles, 2010). However, these randomly selected samples may be redundant and not informative enough. In this paper, we propose to build the initial sentiment classifier by adapting the general sentiment information to the target domain via the domain-specific sentiment similarities as follows:

$w^0 = \arg\min_{w} \; -\sum_{i=1}^{D} p_i w_i + \alpha \sum_{i=1}^{D} \sum_{j \neq i} S_{i,j}(w_i - w_j)^2$,    (3)

where $w^0 \in \mathbb{R}^{D \times 1}$ is the initial sentiment classifier, $\alpha$ is a positive regularization coefficient, $p_i$ is the prior sentiment polarity of word $i$ in sentiment lexicons, and $S_{i,j}$ is the sentiment similarity score between words $i$ and $j$. Eq. (3) is motivated by (Bengio et al., 2006), and the quadratic cost criterion is equivalent to label propagation. In Eq. (3), $-\sum_{i=1}^{D} p_i w_i$ means that if a word $i$ is labeled as a positive (or negative) word in a general-purpose sentiment lexicon, i.e., $p_i > 0$ (or $p_i < 0$), then we constrain its sentiment weight in the sentiment classifier to also be positive (or negative). Otherwise, a penalty will be added to the objective function. In addition, $\sum_{i=1}^{D}\sum_{j \neq i} S_{i,j}(w_i - w_j)^2$ represents that if two words share high sentiment similarity, then we constrain them to have similar sentiment weights in the sentiment classifier. For example, if we find that “great” and “easy” have high sentiment similarity in the Kitchen appliances domain (e.g., “This is a great pan and easy to wash”), and “great” is a positive sentiment word in many sentiment lexicons, then we can infer that “easy” may also be a positive sentiment word in this domain by propagating the sentiment information from “great” to “easy”. In this way, the general sentiment information can be adapted to many domain-specific sentiment expressions in the target domain.
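Eq. (3) can be read as label propagation over the word-similarity graph: for a symmetric $S$, the quadratic term equals $2\,w^T L w$ with the graph Laplacian $L = \mathrm{diag}(S\mathbf{1}) - S$, so a stationary point satisfies $4\alpha L w = p$. A minimal sketch of this closed-form view follows; the small ridge term added for numerical stability is our own addition and is not part of Eq. (3).

```python
import numpy as np

def initial_classifier(p, S, alpha=0.1, eps=1e-6):
    """Sketch of Eq. (3): minimise -p^T w + alpha * sum_{i,j} S_ij (w_i - w_j)^2.
    For symmetric S this equals -p^T w + 2 * alpha * w^T L w, so the optimum
    solves 4 * alpha * L w = p; eps*I is a stability ridge (not in the paper)."""
    S = 0.5 * (S + S.T)                          # enforce symmetry
    L = np.diag(S.sum(axis=1)) - S               # graph Laplacian of S
    return np.linalg.solve(4.0 * alpha * L + eps * np.eye(len(p)), p)
```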
3.4 Query Strategy

Active learning methods iteratively select the most informative instances to label and add them to the training set (Settles, 2010). Thus, an important issue in these methods is how to measure the informativeness of unlabeled samples. In this paper, we select classification uncertainty as the informativeness measure, which has been proven effective in many active learning methods (Zhu et al., 2010; Yang et al., 2015). Since we focus on binary sentiment classification and the classification loss function is log loss, the classification uncertainty of an unlabeled instance $x$ is defined as:

$U(x) = 1 - \left| 1 - \frac{2}{1 + \exp(-w^T x)} \right|$,    (4)

where $w$ is the linear sentiment classification model. The range of $U(x)$ is $[0, 1]$. If $|w^T x|$ is large, which means that the current sentiment classifier is confident in classifying this instance, then the uncertainty of $x$ (i.e., $U(x)$) will be low. If $|w^T x|$ is close to 0, then the sentiment classifier is very uncertain about this instance, probably because the sentiment expressions in this instance are not covered by the current sentiment classifier, and the uncertainty of the instance $x$ will be high. In this case, annotating this instance and adding it to the training set are beneficial, because it can provide information about unknown sentiment expressions and has the potential to quickly improve the quality of the target domain sentiment classifier. However, many researchers have found that unlabeled instances with high uncertainties can be outliers, whose labels are useless and even misleading (Settles, 2010; Zhu et al., 2010). Thus, here we combine uncertainty with representativeness to avoid outliers. Density has proven to be an effective measure of representativeness in active learning methods (Zhu et al., 2010; Hajmohammadi et al., 2015). Here we use the k-nearest-neighbour-based density proposed by Zhu et al. (2010) as the representativeness measure, which is formulated as:

$D(x) = \frac{1}{k} \sum_{x_i \in N(x)} \frac{x^T x_i}{\|x\|_2 \cdot \|x_i\|_2}$,    (5)

where $N(x)$ is the set of the $k$ most similar instances to $x$. The final informativeness score of an unlabeled sample is a linear combination of uncertainty and density, formulated as follows:

$I(x) = \eta(t)U(x) + (1 - \eta(t))D(x)$,    (6)

where $\eta(t) \in [0, 1]$ is the combination coefficient at the $t$th iteration. In this paper, we select a monotonically increasing function for $\eta(t)$, i.e., $\eta(t) = \frac{1}{1 + \exp(2 - \frac{4t}{T})}$, where $T$ is the total number of iterations. It means that at initial iterations we put more emphasis on instances with high representativeness, because the initial sentiment classifier built by adapting the general sentiment information via the domain-specific sentiment similarities is relatively weak, and we prefer to select instances with more popular sentiment expressions to annotate. As more and more labeled samples are added to the training set and the sentiment classifier becomes stronger, we gradually focus on more difficult instances, i.e., those having higher classification uncertainty scores.
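A compact sketch of this query strategy is shown below. It is an illustration under our reading of Eqs. (4)–(6): the absolute value follows the reconstructed form of Eq. (4), and the neighbourhood size k is a placeholder value, not one fixed by the paper.

```python
import numpy as np

def uncertainty(X, w):
    """Eq. (4): 1 - |1 - 2 * sigma(w^T x)|, with sigma the logistic function."""
    prob = 1.0 / (1.0 + np.exp(-X @ w))
    return 1.0 - np.abs(1.0 - 2.0 * prob)

def density(X, k=10):
    """Eq. (5): mean cosine similarity of each sample to its k nearest neighbours."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    cos = Xn @ Xn.T
    np.fill_diagonal(cos, -np.inf)               # exclude the sample itself
    return np.sort(cos, axis=1)[:, -k:].mean(axis=1)

def informativeness(U, D_scores, t, T):
    """Eq. (6) with eta(t) = 1 / (1 + exp(2 - 4t/T))."""
    eta = 1.0 / (1.0 + np.exp(2.0 - 4.0 * t / T))
    return eta * U + (1.0 - eta) * D_scores
```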
3.5 Active Domain Adaptation

Based on the previous discussions, in this section we introduce the complete procedure of our active sentiment domain adaptation (ASDA) approach. Different from existing sentiment domain adaptation methods, which rely on a sentiment classifier trained in source domains to transfer, in our approach we regard the general sentiment information in sentiment lexicons as the “background” domain and adapt it to the target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode. First, we build an initial sentiment classifier according to Eq. (3) by adapting the general sentiment information to the target domain using the domain-specific sentiment similarities among words mined from unlabeled samples of the target domain. Second, we compute the density of each unlabeled sample in $U$ according to Eq. (5). Then we repeat the following steps until the annotation budget has run out. First, we compute the uncertainty of each unlabeled sample in $U$ according to Eq. (4), and further compute their informativeness by combining uncertainty with density according to Eq. (6). Next, we select the unlabeled sample with the highest informativeness from $U$ and manually annotate its sentiment polarity. Then we add it to the set of labeled samples $L$ and remove it from $U$. After that we retrain the sentiment classifier for the target domain based on the general sentiment information $p$, the labeled samples $L$, and the domain-specific sentiment similarities $S$ as follows:

$\arg\min_{w} \; -\sum_{i=1}^{D} p_i w_i + \alpha \sum_{i=1}^{D}\sum_{j \neq i} S_{i,j}(w_i - w_j)^2 + \beta \sum_{x_i \in L} \log(1 + \exp(-y_i w^T x_i)) + \lambda \|w\|_2^2$,    (7)

where $\alpha$, $\beta$, and $\lambda$ are nonnegative coefficients. By the term $-\sum_{i=1}^{D} p_i w_i$ we constrain the target domain sentiment classifier learned by our approach to be consistent with the general sentiment information. In this way, the general sentiment information extracted from sentiment lexicons can be adapted to the target domain. The term $\sum_{i=1}^{D}\sum_{j \neq i} S_{i,j}(w_i - w_j)^2$ is motivated by label propagation (Bengio et al., 2006). If two words tend to have high sentiment similarity with each other according to many unlabeled samples of the target domain, then we constrain their sentiment weights in the target domain sentiment classifier to also be similar. The term $\sum_{x_i \in L} \log(1 + \exp(-y_i w^T x_i))$ means that we hope to minimize the empirical classification loss on the labeled samples of the target domain. Through this term the sentiment information in the labeled samples is incorporated into the learning of the target domain sentiment classifier. The L2-norm regularization term is introduced to control model complexity. The sentiment classifier trained via Eq. (7) is further used at the next iteration of active sentiment domain adaptation until all the budget for manual annotation has been used. Then we obtain the final sentiment classifier of the target domain. The complete algorithm of our active sentiment domain adaptation (ASDA) approach is summarized in Algorithm 1.

Algorithm 1 Active sentiment domain adaptation.
1: Input: The set of unlabeled samples $U$, the general sentiment information $p$, the domain-specific sentiment similarities $S$, and the total annotation budget $N$.
2: Output: Target domain sentiment classifier $w$.
3: Train the initial sentiment classifier $w^0$ (Eq. (3)).
4: Compute the density of each sample $x_i$ in $U$ (Eq. (5)).
5: Initialize the set of labeled samples $L = \emptyset$, the iteration number $t = 0$, and the sentiment classifier $w = w^0$.
6: while $t < N$ do
7:   $t = t + 1$.
8:   Compute the uncertainty score of each sample $x_i$ in $U$ (Eq. (4)).
9:   Compute the informativeness score of each sample $x_i$ in $U$ (Eq. (6)).
10:  Select $x^*$ from $U$ which has the highest informativeness score.
11:  Annotate $x^*$ and obtain its sentiment label $y$.
12:  $L = L \cup \{(x^*, y)\}$, $U = U \setminus \{x^*\}$.
13:  Update the sentiment classifier $w$ according to Eq. (7).
14: end while
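For illustration, a rough sketch of the retraining step in line 13 (the objective of Eq. (7)) using a generic quasi-Newton solver is given below. It reuses the Laplacian identity mentioned for Eq. (3), assumes SciPy is available, and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def retrain_classifier(p, S, X_lab, y_lab, alpha=0.1, beta=1.0, lam=1.0,
                       w_init=None):
    """Minimise Eq. (7): lexicon prior + similarity smoothing + log loss on
    the labelled samples L + L2 regularisation."""
    S = 0.5 * (S + S.T)
    L = np.diag(S.sum(axis=1)) - S               # graph Laplacian of S

    def objective(w):
        margins = y_lab * (X_lab @ w)
        log_loss = np.logaddexp(0.0, -margins).sum()   # sum log(1 + exp(-m))
        smooth = 2.0 * w @ L @ w                 # = sum_{i,j} S_ij (w_i - w_j)^2
        return -p @ w + alpha * smooth + beta * log_loss + lam * w @ w

    w0 = np.zeros(len(p)) if w_init is None else w_init
    return minimize(objective, w0, method="L-BFGS-B").x
```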
4 Experiments

4.1 Datasets

The dataset used in our experiments is the Amazon product review dataset¹ collected by Blitzer et al. (2007), which is widely used in sentiment analysis and domain adaptation research (Pan et al., 2010; Bollegala et al., 2011). This dataset contains product reviews in four domains, i.e., Book, DVD, Electronics, and Kitchen appliances. In each domain, 1,000 positive and 1,000 negative reviews as well as a large number of unlabeled samples are included. The detailed statistics of this dataset are summarized in Table 1.

Table 1: The statistics of the Amazon dataset.
            Book      DVD       Electronics   Kitchen
positive    1,000     1,000     1,000         1,000
negative    1,000     1,000     1,000         1,000
unlabeled   973,194   122,438   21,009        17,856

Following many previous works (Blitzer et al., 2007; Bollegala et al., 2011), unigrams and bigrams were used to build feature vectors in our experiments. We randomly split the labeled samples in each domain into two parts of equal size. The first part was used as test data, and the second part was used as the pool of “unlabeled” samples to perform active learning. The general sentiment information was extracted from Bing Liu's sentiment lexicon² (Hu and Liu, 2004), which is one of the state-of-the-art general-purpose sentiment lexicons. The domain-specific sentiment similarities among words were extracted from the large-scale unlabeled samples. The total number of samples actively selected by our approach to annotate was set to 100. The values of α, β, and λ were set to 0.1, 1, and 1 respectively. We repeated each experiment 10 times independently and the average results were reported.

¹ https://www.cs.jhu.edu/~mdredze/datasets/sentiment/
² https://www.cs.uic.edu/liub/FBS/sentiment-analysis.html

4.2 Algorithm Effectiveness

First we conducted several experiments to explore the effectiveness of our active sentiment domain adaptation (ASDA) approach. We hope to answer two questions via these experiments: 1) whether the domain-specific sentiment similarities among words mined from unlabeled samples of the target domain are useful for adapting the general sentiment information to the target domain; 2) whether a small number of samples which are actively selected and annotated in the target domain can help improve the domain adaptation performance. In our experiments, we implemented different versions of our ASDA approach using different combinations of sentiment information. The first one is Lexicon, which means only using the general sentiment information; no domain adaptation is conducted. It serves as a baseline. The second one is Lexicon+SentiSim, which means adapting the general sentiment information to the target domain using the domain-specific sentiment similarities, but labeled samples of the target domain are not incorporated. The third one is Lexicon+SentiSim+Label, which is the complete ASDA approach. The experimental results are summarized in Fig. 1.

[Figure 1: The performance of our approach with different combinations of sentiment information (accuracy in the Book, DVD, Electronics, and Kitchen domains). Lexicon, SentiSim, and Label represent the general-purpose sentiment lexicon, the domain-specific sentiment similarities among words, and a small number of actively selected and annotated samples in the target domain, respectively.]

According to Fig. 1, the performance of Lexicon is suboptimal. This is because the general sentiment lexicons cannot capture the domain-specific sentiment expressions in the target domain (Choi and Cardie, 2009). Lexicon+SentiSim performs significantly better than Lexicon, which validates that the sentiment similarities among words extracted from unlabeled samples of the target domain contain rich domain-specific sentiment information, and can help propagate the general sentiment information to many domain-specific sentiment expressions. Besides, after incorporating a small number of labeled samples which are actively selected and annotated by our approach in an active learning mode, the performance of our sentiment domain adaptation approach is significantly improved. This is because although these labeled samples are limited in size and cannot cover all the sentiment expressions in the target domain, they can provide sentiment information about popular domain-specific sentiment expressions, which can be propagated to other sentiment expressions in the target domain during the domain adaptation process. Thus, the above experimental results validate the effectiveness of our approach.

We also conducted several experiments to verify the advantage of the actively selected samples over randomly selected samples and validate the effectiveness of our active learning algorithm. We also compared the dynamic weighting scheme for combining uncertainty and density with the constant weighting scheme. The experimental results are summarized in Fig. 2.

[Figure 2: The performance of our approach with labeled samples selected by different strategies (ASDA_Random, ASDA_Constant, ASDA_Dynamic; accuracy in the Book, DVD, Electronics, and Kitchen domains).]

According to Fig. 2, our approach with actively selected samples performs better than that with randomly selected samples.
This indicates that these actively selected samples are more informative than randomly selected samples for sentiment domain adaptation. In addition, our approach with the dynamic weighting scheme for combining uncertainty and density outperforms that with the constant weighting scheme, which implies that it is beneficial to emphasize representative samples at initial iterations and gradually focus on difficult samples at later iterations. Thus, the experimental results validate the effectiveness of our active learning algorithm.

4.3 Performance Evaluation

In this section we conducted experiments to evaluate the performance of our approach by comparing it with several baseline methods. The methods to be compared include: 1) MPQA and BingLiu, using two state-of-the-art sentiment lexicons, i.e., MPQA (Wilson et al., 2005) and Bing Liu's lexicon (Hu and Liu, 2004), for sentiment classification following the suggestions in (Hu and Liu, 2004); 2) SVM, LS, and LR, three popular supervised sentiment classification methods, i.e., support vector machine (Pang et al., 2002), least squares (Hu et al., 2013), and logistic regression (Wu et al., 2015); 3) ZIAL, the zero-initialized active learning method (Cesa-Bianchi et al., 2006); 4) LIAL, the active learning method initialized by randomly selected labeled data (Settles, 2010); 5) SCL and SFA, two well-known sentiment domain adaptation methods proposed in (Blitzer et al., 2007) and (Pan et al., 2010) respectively; 6) ILP, adapting sentiment lexicons to the target domain via integer linear programming (Choi and Cardie, 2009); 7) AODA, the active online domain adaptation method (Rai et al., 2010); 8) ALCD, the active learning method for cross-domain sentiment classification (Li et al., 2013); 9) ASDA, our active sentiment domain adaptation approach. For the above methods, if labeled target domain samples are needed in training, the number of labeled samples was set to 100, and if source domain labeled samples are needed in training, the number of labeled samples was set to 1,000. The parameters in baseline methods were tuned via cross-validation. The experimental results are summarized in Table 2.

Table 2: Sentiment classification performance of different methods in different domains. Acc and Fscore represent accuracy and macro-averaged Fscore respectively.
          Book              DVD               Electronics       Kitchen
          Acc     Fscore    Acc     Fscore    Acc     Fscore    Acc     Fscore
MPQA      0.5953  0.5673    0.6149  0.5936    0.6150  0.6070    0.6392  0.6258
BingLiu   0.6015  0.6048    0.6539  0.6604    0.6248  0.6320    0.6765  0.6930
SVM       0.6580  0.6511    0.6688  0.6652    0.7138  0.7129    0.7386  0.7412
LS        0.6543  0.6542    0.6692  0.6687    0.7194  0.7185    0.7479  0.7465
LR        0.6606  0.6582    0.6774  0.6742    0.7257  0.7226    0.7492  0.7480
ZIAL      0.6693  0.6663    0.6850  0.6821    0.7310  0.7299    0.7574  0.7568
LIAL      0.6756  0.6731    0.6866  0.6838    0.7374  0.7360    0.7599  0.7595
SCL       0.7233  0.7201    0.7469  0.7438    0.7768  0.7730    0.8099  0.8095
SFA       0.7307  0.7285    0.7513  0.7485    0.7846  0.7812    0.8174  0.8153
ILP       0.6942  0.6931    0.7153  0.7124    0.7463  0.7445    0.7793  0.7768
AODA      0.6928  0.6912    0.7172  0.7165    0.7518  0.7512    0.7698  0.7690
ALCD      0.7237  0.7221    0.7369  0.7364    0.7768  0.7788    0.7979  0.7970
ASDA      0.7508  0.7501    0.7764  0.7759    0.8014  0.8011    0.8329  0.8328

According to Table 2, the performance of directly applying sentiment lexicons to the target domain is suboptimal. This is because there are many domain-specific sentiment expressions that are not covered by these general-purpose sentiment lexicons (Choi and Cardie, 2009).
In addition, the performance of supervised sentiment classification methods such as SVM, LS, and LR is also limited, because the labeled samples for training are extremely scarce. The active learning methods such as ZIAL (Cesa-Bianchi et al., 2006) and LIAL (Settles, 2010) perform relatively better, because they can actively select informative samples to annotate and learn from. Our approach can outperform both of them. This is because, besides the labeled samples, our approach also adapts the general sentiment information in sentiment lexicons to the target domain and incorporates it into the learning of the target domain sentiment classifier. Our approach also performs better than state-of-the-art domain adaptation methods such as SCL (Blitzer et al., 2007) and SFA (Pan et al., 2010). This implies that a small number of actively selected labeled samples from the target domain are beneficial for sentiment domain adaptation. ILP (Choi and Cardie, 2009) tries to adapt a sentiment lexicon to the target domain, which is similar to our approach. ILP relies on labeled samples to extract the relations among words and the relations between words and sentiment expressions. However, labeled samples in the target domain are usually limited, and the sentiment information in many unlabeled samples is not exploited in ILP. Thus, our approach can outperform it. Similar to our approach, AODA (Rai et al., 2010) and ALCD (Li et al., 2013) also apply active learning to domain adaptation. The major difference is that in our approach the general sentiment information extracted from sentiment lexicons is adapted to the target domain, while in AODA and ALCD the sentiment classifier trained in source domains is transferred. The superior performance of our approach implies that the general sentiment information has better generalization ability than a sentiment classifier trained in a specific source domain, and is more suitable for sentiment domain adaptation.

We further conducted several experiments to validate the advantage of our approach in training an accurate sentiment classifier for the target domain with only a few labeled samples. We varied the annotation budget, i.e., the number of labeled samples, from 100 to 1,000. The learning curve of our ASDA approach in the Book domain is shown in Fig. 3. We also included a purely supervised sentiment classification method, i.e., SVM, in Fig. 3 as a baseline for comparison.

[Figure 3: The performance of ASDA and SVM with different numbers of labeled samples (accuracy vs. number of labeled samples).]

Fig. 3 shows that our ASDA approach can consistently outperform SVM when the same number of labeled samples is used. The performance advantage of our approach is more significant when labeled samples are scarce. For example, the performance of our approach with only 200 labeled samples is similar to SVM with more than 800 labeled samples.
Thus, the experimental results validate that, by adapting the general sentiment information to the target domain and selecting the most informative samples to annotate and learn from, our approach can effectively reduce the manual annotation effort and can train an accurate sentiment classifier for the target domain with much fewer labeled samples.

4.4 Parameter Analysis

In this section, we conducted several experiments to explore the influence of parameter settings on the performance of our approach. α and β are the two most important parameters in our approach, which control the relative importance of the domain-specific sentiment similarities and the actively selected samples in training the sentiment classifier for the target domain. The experimental results for parameters α and β are summarized in Fig. 4.

[Figure 4: The influence of the parameter settings of α and β on the performance of our approach ((a) accuracy vs. log10(α); (b) accuracy vs. log10(β); one curve per domain).]

According to Fig. 4, when α and β are too small, the performance of our approach is not optimal. This is because the useful sentiment information in the domain-specific sentiment similarities mined from unlabeled samples and in the actively selected labeled samples of the target domain is not fully exploited. Thus, the performance of our approach improves when these parameters increase from a small value. However, when these parameters become too large, the performance of our approach starts to decline. This is because when β is too large the sentiment classifier learned by our approach is mainly decided by the limited labeled samples, and the general sentiment information extracted from sentiment lexicons is not fully exploited. When α is too large, the information in the domain-specific sentiment similarities is overemphasized, and many different words will have nearly the same sentiment weights. Thus, the performance of our approach in these scenarios is also not optimal. A moderate value of α and β is most suitable for our approach.

5 Conclusion

In this paper we present an active sentiment domain adaptation approach to train an accurate sentiment classifier for the target domain with fewer labeled samples. In our approach, the general sentiment information in sentiment lexicons is adapted to the target domain with the help of a small number of labeled samples which are selected and annotated in an active learning mode. Both classification uncertainty and density are considered when selecting informative samples to label. In addition, we extract domain-specific sentiment similarities among words from unlabeled samples of the target domain based on both syntactic rules and co-occurrence patterns, and incorporate them into the domain adaptation process to propagate the general sentiment information to many domain-specific sentiment words in the target domain. We also propose a unified model to incorporate different types of sentiment information to train the sentiment classifier for the target domain. Experimental results on benchmark datasets show that our approach can train an accurate sentiment classifier and at the same time reduce the manual annotation effort.

Acknowledgements

This research is supported by the Key Research Project of the Ministry of Science and Technology of China (Grant no. 2016YFB0800402) and the Key Program of National Natural Science Foundation of China (Grant nos. U1536201, U1536207, and U1405254).

References

Yoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. 2006. Label propagation and quadratic criterion. Semi-supervised learning 10. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440–447.
http://aclweb.org/anthology-new/P/P07/P07-1056. Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In ACL:HLT. pages 132–141. http://aclweb.org/anthology/P11-1014. Nicolo Cesa-Bianchi, Claudio Gentile, and Luca Zaniboni. 2006. Worst-case analysis of selective sampling for linear classification. Journal of Machine Learning Research 7(Jul):1205–1230. Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In NIPS. pages 2456–2464. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domain-specific sentiment classification. In EMNLP. pages 590–598. http://aclweb.org/anthology/D09-1062. Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning 28(2-3):133–168. http://dx.doi.org/10.1023/A:1007330508534. Yifan Fu, Xingquan Zhu, and Bin Li. 2013. A survey on instance selection for active learning. Knowledge and Information Systems 35(2):249–283. https://doi.org/10.1007/s10115-012-0507-8. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML. pages 513–520. Mohammad Sadegh Hajmohammadi, Roliana Ibrahim, Ali Selamat, and Hamido Fujita. 2015. Combination of active learning and self-training for cross-lingual sentiment classification with density analysis of unlabelled samples. Information sciences 317:67–77. http://dx.doi.org/10.1016/j.ins.2015.04.003. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In EMNLP. pages 595–605. http://aclweb.org/anthology/D/D16/D16-1057. Vasileios Hatzivassiloglou and Kathleen R McKeown. 1997. Predicting the semantic orientation of adjectives. In ACL. pages 174–181. http://aclweb.org/anthology/P/P97/P97-1023. Yulan He, Chenghua Lin, and Harith Alani. 2011. Automatically extracting polarity-bearing topics for cross-domain sentiment classification. In ACL:HLT. pages 123–131. http://aclweb.org/anthology/P111013. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD. pages 168–177. http://doi.acm.org/10.1145/1014052.1014073. Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Exploiting social relations for sentiment analysis in microblogging. In WSDM. pages 537–546. http://doi.acm.org/10.1145/2433396.2433465. Sheng Huang, Zhendong Niu, and Chongyang Shi. 2014. Automatic construction of domain-specific sentiment lexicon based on constrained label propagation. Knowledge-Based Systems 56:191–200. http://dx.doi.org/10.1016/j.knosys.2013.11.009. Lianghao Li, Xiaoming Jin, Sinno Jialin Pan, and Jian-Tao Sun. 2012. Multi-domain active learning for text classification. In KDD. pages 1086–1094. http://doi.acm.org/10.1145/2339530.2339701. Shoushan Li, Yunxia Xue, Zhongqing Wang, and Guodong Zhou. 2013. Active learning for cross-domain sentiment classification. In IJCAI. pages 2127– 2133. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies 5(1):1–167. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Crossdomain sentiment classification via spectral feature alignment. In WWW. ACM, pages 751–760. http://doi.acm.org/10.1145/1772690.1772767. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. 
TKDE 22(10):1345–1359. http://dx.doi.org/10.1109/TKDE.2009.191. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval 2(1-2):1–135. http://dx.doi.org/10.1561/1500000011. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In EMNLP. pages 79–86. https://doi.org/10.3115/1118693.1118704. Piyush Rai, Avishek Saha, Hal Daumé III, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing. pages 27–32. http://aclweb.org/anthology/W10-0104. Burr Settles. 2010. Active learning literature survey. University of Wisconsin, Madison 52(55-66):11. Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. Pte: Predictive text embedding through large-scale heterogeneous text networks. In KDD. ACM, pages 1165–1174. http://doi.acm.org/10.1145/2783258.2783307. Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. The Journal of Machine Learning Research 2:45–66. http://dx.doi.org/10.1162/153244302760185243. Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In ACL. pages 417–424. http://dx.doi.org/10.3115/1073083.1073153. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In NAACL. pages 777–785. http://www.aclweb.org/anthology/N10-1119. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In EMNLP. pages 347–354. http://dx.doi.org/10.3115/1220575.1220619. Fangzhao Wu and Yongfeng Huang. 2016. Sentiment domain adaptation with multiple sources. In ACL. pages 301–310. http://aclweb.org/anthology/P16-1029. Fangzhao Wu, Yangqiu Song, and Yongfeng Huang. 2015. Microblog sentiment classification with contextual knowledge regularization. In AAAI. pages 2332–2338. Fangzhao Wu, Sixing Wu, Yongfeng Huang, Songfang Huang, and Yong Qin. 2016. Sentiment domain adaptation with multi-level contextual sentiment knowledge. In CIKM. ACM, pages 949–958. https://doi.org/10.1145/2983323.2983851. Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G. Hauptmann. 2015. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision 113(2):113–127. http://dx.doi.org/10.1007/s11263-014-0781-x. Dani Yogatama and Noah A. Smith. 2014. Making the most of bag of words: Sentence regularization with alternating direction method of multipliers. In ICML. pages 656–664. Jingbo Zhu, Huizhen Wang, Benjamin K Tsou, and Matthew Ma. 2010. Active learning with sampling by uncertainty and density for data annotations. IEEE Transactions on Audio, Speech, and Language Processing 18(6):1323–1331. http://dx.doi.org/10.1109/TASL.2009.2033421.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1712–1721, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1157

Volatility Prediction using Financial Disclosures Sentiments with Word Embedding-based IR Models

Navid Rekabsaz1, Mihai Lupu1, Artem Baklanov2, Allan Hanbury1, Alexander Dür3, Linda Anderson1
1,3 TU Wien
2 International Institute for Applied Systems Analysis (IIASA)
2 N.N. Krasovskii Institute of Mathematics and Mechanics
1 {family name}@ifs.tuwien.ac.at, [email protected], [email protected]

Abstract

Volatility prediction—an essential concept in financial markets—has recently been addressed using sentiment analysis methods. We investigate the sentiment of annual disclosures of companies in stock markets to forecast volatility. We specifically explore the use of recent Information Retrieval (IR) term weighting models that are effectively extended by related terms using word embeddings. In parallel to textual information, factual market data have been widely used as the mainstream approach to forecast market risk. We therefore study different fusion methods to combine text and market data resources. Our word embedding-based approach significantly outperforms state-of-the-art methods. In addition, we investigate the characteristics of the reports of the companies in different financial sectors.

1 Introduction

Financial volatility is an essential indicator of the instability and risk of a company, sector, or economy. Volatility forecasting has gained considerable attention during the last three decades. In addition to using historic stock prices, new methods in this domain use sentiment analysis to exploit various text resources, such as financial reports (Kogan et al., 2009; Wang et al., 2013; Tsai and Wang, 2014; Nopp and Hanbury, 2015), news (Kazemian et al., 2014; Ding et al., 2015), message boards (Nguyen and Shirai, 2015), and earning calls (Wang and Hua, 2014).

An interesting resource of textual information is the companies' annual disclosures, known as 10-K filing reports. They contain comprehensive information about the companies' business as well as risk factors. Specifically, section Item 1A - Risk Factors of the reports contains information about the most significant risks for the company. These reports are however long, redundant, and written in a style that makes them complex to process. Dyer et al. (2016) note that: “10-K reports are getting more redundant and complex [...] (it) requires a reader to have 21.6 years of formal education to fully comprehend”. Dyer et al. also analyse the topics discussed in the reports and observe a constant increase over the years in both the length of the documents and the number of topics. They claim that the increase in length is not the result of economic factors but is due to verboseness and redundancy in the reports. They suggest that only the risk factors topic appears to be useful and informative to investors. Their analysis motivates us to study the effectiveness of the Risk Factors section for volatility prediction.
Our research builds on previous studies on volatility prediction and information analysis of 10-K reports using sentiment analysis (Kogan et al., 2009; Tsai and Wang, 2014; Wang et al., 2013; Nopp and Hanbury, 2015; Li, 2010; Campbell et al., 2014), in the sense that, since the reports are long (average length of 5,000 words), different approaches are required compared with studies of sentiment analysis on short texts. Such previous studies on 10-K reports have mostly used data from before 2008, and there is little work on the analysis of the informativeness and effectiveness of recent reports with regard to volatility prediction. We will indeed show that the content of the reports changes significantly not only before and after 2008, but rather in a cycle of 3-4 years.

In terms of the use of textual content for volatility prediction, this paper shows that state-of-the-art Information Retrieval (IR) term weighting models, which benefit from word embedding information, have a significantly positive impact on prediction accuracy. The most recent study on the topic (Tsai and Wang, 2014) used related terms obtained by word embeddings to expand the lexicon of sentiment terms. In contrast, similar to Rekabsaz et al. (2016b), we define the weight of each lexicon term by extending it to the similar terms in the document. The significant improvement of this approach for document retrieval, by capturing the importance of the terms, motivates us to apply it to sentiment analysis. We extensively evaluate various state-of-the-art sentiment analysis methods to investigate the effectiveness of our approach.

In addition to text, factual market data (i.e., historical prices) provide valuable resources for volatility prediction, e.g., in the framework of GARCH models (Engle, 1982). An emerging question is how to approach the combination of the textual and factual market information. We propose various methods for this issue and show the performance and characteristics of each.

The financial system covers a wide variety of industries, from daily-consumption products to space mission technologies. It is intuitive to consider that the factors of instability and uncertainty differ between the various sectors while being similar within them. We therefore also analyse the sentiment of the reports of each sector separately and study their particular characteristics.

The present study shows the value of the information in the 10-K reports for volatility prediction. Our proposed approach to sentiment analysis significantly outperforms state-of-the-art methods (Kogan et al., 2009; Tsai and Wang, 2014; Wang et al., 2013). We also show that performance can be further improved by effectively combining textual and factual market information. In addition, we shed light on the effects of tailoring the analysis to each sector: despite the reasonable expectation that domain-specific training would lead to improvements, we show that our general model generalizes well and outperforms sector-specific trained models.

The remainder of the paper is organized as follows: in the next section, we review the state-of-the-art and related studies. Section 3 formulates the problem, followed by a detailed explanation of our approach in Section 4. We explain the dataset and settings of the experiments in Section 5, followed by the full description of the experiments in Section 6. We conclude the work in Section 7.

2 Related Work

Market prediction has been attracting much attention in recent years in the natural language processing community.
Kazemian et al. (2014) use sentiment analysis for predicting stock price movements in a simulated security trading system using news data, showing the advantages of the method against simple trading strategies. Ding et al. (2015) address a similar objective while using deep learning to extract and learn events in the news. Xie et al. (2013) introduce a semantic tree-based model to represent news data for predicting stock price movement. Luss et al. (2015) also exploit news in combination with return prices to predict intra-day price movements. They use the Multi Kernel Learning (MKL) algorithm for combining the two features. The combination shows improvement in the final prediction in comparison to using each of the features alone. Motivated by this study, we investigate the performance of the MKL algorithm as one of the methods to combine the textual with non-textual information. Other data resources, such as stocks' message boards, are used by Nguyen and Shirai (2015) to study topic modelling for aspect-based sentiment analysis. Wang and Hua (2014) investigate the sentiment of the transcripts of earning calls for volatility prediction using the Gaussian Copula regression model.

While the mentioned studies use short-length texts (sentence or paragraph level), approaches using long texts (document level) for market prediction are mainly based on n-gram bag-of-words methods. Nopp and Hanbury (2015) study the sentiment of banks' annual reports to assess banking system risk factors using a finance-specific lexicon, provided by Loughran and McDonald (2011), in both an unsupervised and a supervised manner. More directly related to the informativeness of the 10-K reports for volatility prediction, Kogan et al. (2009) use a linear Support Vector Machine (SVM) algorithm on the reports published between 1996–2006. Wang et al. (2013) improve upon this by using the Loughran and McDonald (2011) lexicon, observing improvement in the prediction. Later, Tsai and Wang (2014) apply the same method as Wang et al. (2013) while additionally using word embeddings to expand the financial lexicon. We reproduce all the methods in these studies, and show the advantage of our sentiment analysis approach.

3 Problem Formulation

In this section, we formulate the volatility forecasting problem and the prediction objectives of our experiments. Similar to previous studies (Christiansen et al., 2012; Kogan et al., 2009; Tsai and Wang, 2014), volatility is defined as the natural log of the standard deviation of (adjusted) return prices in a window of τ days. This definition is referred to as standard volatility (Li and Hong, 2011) or realized volatility (Liu and Tse, 2013), defined as follows:

$v_{[s, s+\tau]} = \ln\left(\sqrt{\frac{\sum_{t=s}^{s+\tau}(r_t - \bar{r})^2}{\tau}}\right)$    (1)

where $r_t$ is the return price and $\bar{r}$ the mean of return prices. The return price is calculated by $r_t = \ln(P_t) - \ln(P_{t-1})$, where $P_t$ is the (adjusted) closing price of a given stock at trading date $t$. Given an arbitrary report $i$, we define a prediction label $y^k_i$ as the volatility of the stock of the reporting company in the $k$th quarter-sized window starting from the issue date $s_i$ of the report:

$y^k_i = v_{[s_i + 64(k-1),\; s_i + 64k]}$    (2)

Every quarter is, per convention, considered as 64 working days, while the full year is assumed to have 256 working days. We use 8 learners for labels $y^1$ to $y^8$. For brevity, unless otherwise mentioned, we report the volatility of the first year by calculating the mean over the first four quarters after the publication of each report.
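As a rough illustration of how these labels could be derived (assuming a NumPy array of adjusted closing prices and the index of the report's issue date; corner cases such as missing trading days are ignored):

```python
import numpy as np

def realized_volatility(prices):
    """Eq. (1): log of the standard deviation of daily log returns."""
    r = np.diff(np.log(prices))
    return np.log(np.sqrt(((r - r.mean()) ** 2).sum() / len(r)))

def quarterly_labels(prices, issue_idx, quarters=8, q_len=64):
    """Eq. (2): volatility of each 64-trading-day window after the report."""
    return [realized_volatility(
                prices[issue_idx + q_len * (k - 1): issue_idx + q_len * k + 1])
            for k in range(1, quarters + 1)]
```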
4 Methodology

We first describe our text sentiment analysis methods, followed by the features obtained from factual market data, and finally explain the methods used to combine the textual and market feature sets.

4.1 Sentiment Analysis

Similar to previous studies (Nopp and Hanbury, 2015; Wang et al., 2013), we extract the keyword set from a finance-specific lexicon (Loughran and McDonald, 2011) using the positive, negative, and uncertain groups, stemmed with the Porter stemmer. We refer to this keyword set as Lex. Tsai and Wang (2014) expanded this set by adding the top 20 related terms of each term to the original set. The related terms are obtained using a Word2Vec (Mikolov et al., 2013) model, built on the corpus of all the reports, with Cosine similarity. We also use this expanded set in our experiments and refer to it as LexExt. The following word weighting schemes are commonly used in Information Retrieval and we consider them as well in our study:

TC: \log(1 + tc_{d_i}(t))
TF: \frac{\log(1 + tc_{d_i}(t))}{\|d_i\|}
TFIDF: \frac{\log(1 + tc_{d_i}(t))}{\|d_i\|}\,\log\!\left(1 + \frac{|d_i|}{df(t)}\right)
BM25: \frac{(k+1)\,tf_{d_i}(t)}{k + tf_{d_i}(t)}, \quad tf_{d_i}(t) = \frac{tc_{d_i}(t)}{(1-b) + b\,\frac{|d_i|}{avgdl}}

where tc_{d_i}(t) is the number of occurrences of keyword t in report i, \|d_i\| denotes the Euclidean norm of the keyword weights of the report, |d_i| is the length of the report (number of words in the report), avgdl is the average document length, and finally k and b are parameters. For them, we use the settings of previous studies (Rekabsaz et al., 2016b), i.e. k = 1.2 and b = 0.65.

In addition to the standard weighting schemes, we use state-of-the-art weighting methods in Information Retrieval (Rekabsaz et al., 2016b) which benefit directly from word embedding models: they incorporate the similarity values between words, provided by the word embedding model, into the weighting schemes by extending the weight of each lexicon keyword with its similar words:

\widehat{tc}_{d_i}(t) = tc_{d_i}(t) + \sum_{t' \in R(t)} sim(t, t')\, tc_{d_i}(t')   (3)

where R(t) is the list of words similar to the keyword t, and sim(t, t') is the Cosine similarity value between the vector representations of the words t and t'. As previously suggested by Rekabsaz et al. (2016a, 2017), we use the Cosine similarity function with threshold 0.70 for selecting the set R(t) of similar words. We define the extended versions of the standard weighting schemes as \widehat{TC}, \widehat{TF}, \widehat{TFIDF}, and \widehat{BM25} by replacing tc_{d_i}(t) with \widehat{tc}_{d_i}(t) in each of the schemes.

The feature vector generated by the weights of the Lex or LexExt lexicon is highly sparse, as the number of dimensions is larger than the number of data points. We therefore reduce the dimensionality by applying Principal Component Analysis (PCA). Our initial experiments, trying a range of dimensions from 50 to 1000, show 400 dimensions to be the optimum. Given the final feature vector x with l dimensions, we apply SVM, a well-known method for training both regression and classification models. Support Vector Regression (Drucker et al., 1997) formulates the training as the following optimization problem:

\min_{w \in \mathbb{R}^l} \frac{1}{2}\|w\|^2 + \frac{C}{N}\sum_{i=1}^{N} \max(0, \|y_i - f(x_i; w)\| - \epsilon)   (4)

Similar to previous studies (Tsai and Wang, 2014; Kogan et al., 2009), we set C = 1.0 and \epsilon = 0.1. To solve the above problem, the function f can be re-parametrized in terms of a kernel function K with weights \alpha_i:

f(x; w) = \sum_{i=1}^{N} \alpha_i K(x_i, x)   (5)

The kernel can be considered as a (similarity) function between the feature vector of the document and the vectors of all the other documents.
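To make the extended weighting scheme above concrete, here is a minimal Python sketch of the embedding-extended count of Eq. (3) and the resulting extended BM25 weight. It assumes the similarity lists R(t) have already been precomputed (e.g., from a Word2Vec model with Cosine threshold 0.70, as described above); all function and variable names are illustrative rather than taken from the authors' implementation.

```python
from collections import Counter

def extended_tc(tokens, term, similar_terms):
    """Eq. (3): raw count of `term` plus similarity-weighted counts of its related terms.
    `similar_terms` maps a lexicon term to {related_term: cosine_similarity >= 0.70},
    e.g. precomputed from a Word2Vec model trained on the report corpus (an assumption)."""
    counts = Counter(tokens)
    tc = counts[term]
    for rel, sim in similar_terms.get(term, {}).items():
        tc += sim * counts[rel]
    return tc

def extended_bm25(tokens, term, similar_terms, avgdl, k=1.2, b=0.65):
    """BM25 weight with tc replaced by the extended count (the extended BM25 scheme)."""
    tc_hat = extended_tc(tokens, term, similar_terms)
    tf = tc_hat / ((1 - b) + b * len(tokens) / avgdl)
    return (k + 1) * tf / (k + tf)

# Illustrative usage with a toy report and a hand-made similarity mapping.
report = "risk of substantial loss and uncertain litigation outcome".split()
sims = {"loss": {"losses": 0.83, "impairment": 0.74}}
weight = extended_bm25(report, "loss", sims, avgdl=5000.0)
```

In practice the weights of all lexicon terms computed this way form the (sparse) document vector that is subsequently reduced with PCA.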
Our initial experiments showed better performance of the Radial Basis Function (RBF) kernel in comparison to linear and cosine kernels and is therefore used in this paper. In addition, motivated by Moraes et al.(Moraes et al., 2013), we use of an Artificial Neural Network (ANN) algorithm to test the effectiveness of neural networks for automatic feature learning. We tried several neural network architectures with different regularization methods (early-stopping, regularization term, dropout). The best performing results were achieved with two hidden layers (400 and 500 nodes respectively), tanh for activation function, and learning rate of 0.001 in gradient decent with early stopping. However, the networks could not provide superior results than the SVM regressors. Therefore, for this report, we only report the SVM methods. 4.2 Market Features In addition to textual features, we define three features using the factual market data and historical prices—referred to as market features—as follows: Current Volatility is calculated on the window of one quartile before the issue date of the report: v[si−64,si]. GARCH (Bollerslev, 1986) is a common econometric time-series model used for predicting stock price volatility. We use a GARCH (1, 1) model, trained separately for each report on intra-day return prices. We use all price data available before the issue date of the report for fitting the model. The GARCH (1, 1) model used predicts the volatility of the next day by looking at the previous day’s volatility. When forecasting further than one day into the future one needs to use the model’s own predictions in order to be able to make predictions for more than one day ahead. When forecasting further into the future these conditional forecasts of the variance will converge to a value called unconditional variance. As our forecast period is one quarter, we will approximate the volatility of future quarters with the unconditional variance. Sector is the sector that the corresponding company of the report belongs to, namely energy (ene), basic industries (ind), finance (fin), technology (tech), miscellaneous (misc), consumer nondurables (n-dur), consumer durables (dur), capital goods (capt), consumer services (serv), public utilities (pub), and health care (hlth)1. The feature is converted to numerical representation using onehot encoding. 4.3 Feature Fusion To combine the text and market feature sets, the first approach, used also in previous studies ((Kogan et al., 2009; Wang et al., 2013)) is simply joining all the features in one feature space. In the context of multi-model learning, the method is referred to as early fusion. In contrast, late fusion approaches first learn a model on each feature set and then use/learn a meta model to combine their results. As our second approach, we use stacking (Wolpert, 1992), a special case of late fusion. In stacking, we first split the training set into two parts (70%-30% portions). Using the first portion, we train separate machine learning models for each of the text and market feature sets. Next, we predict labels of the second portion with the trained models and finally train another model to capture the combinations between the outputs of the base models. In our experiments, the final model is always trained with SVM with RBF kernel. Stacking is computationally inexpensive. How1We follow NASDAQ categorization of sectors. 1715 ever, due to the split of the training set, the base models or the meta model may suffer from lack of training data. 
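To illustrate the stacking scheme just described, the sketch below trains one RBF-kernel SVR per feature set on 70% of the training data and a meta SVR on their predictions for the remaining 30%. The use of scikit-learn and all choices beyond those stated in the text are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def train_stacking(X_text, X_market, y, seed=0):
    """Late fusion by stacking: base SVRs on each feature set, meta SVR on their outputs."""
    # 70/30 split of the training set (first part for base models, second for the meta model).
    idx = np.arange(len(y))
    idx_base, idx_meta = train_test_split(idx, test_size=0.3, random_state=seed)

    base_text = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_text[idx_base], y[idx_base])
    base_market = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_market[idx_base], y[idx_base])

    # Meta features: predictions of the base models on the held-out 30%.
    Z = np.column_stack([base_text.predict(X_text[idx_meta]),
                         base_market.predict(X_market[idx_meta])])
    meta = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(Z, y[idx_meta])
    return base_text, base_market, meta

def predict_stacking(models, X_text, X_market):
    base_text, base_market, meta = models
    Z = np.column_stack([base_text.predict(X_text), base_market.predict(X_market)])
    return meta.predict(Z)
```

The split ratio and the choice of meta learner are the main design knobs here; the paper keeps the final (meta) model an SVM with RBF kernel.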
A potential approach to learn both the feature sets in one model is the MKL method. The MKL algorithm (also called intermediate fusion (Noble et al., 2004)) extends the kernel of the SVM model by learning (simultaneous to the parameter learning) an optimum combination of several kernels. The MKL algorithm as formulated in Lanckriet et al. (2004) adds the following criterion to Eq. 5 for kernel learning: K∗= X i diKi where X i di = 1, di ≥0 (6) where Ki is a predefined kernel. G¨onen and Alpaydın (2011) mention two uses of MKL: learning the optimum kernel in SVM, and combining multiple modalities (feature sets) via each kernel. However, the optimization can be computationally challenging. We use the mklaren method (Straˇzar and Curk, 2016) which has linear complexity in the number of data instances and kernels. It has been shown to outperform recent multi kernel approximation approaches. We use RBF kernels for both the text and market feature sets. 5 Experiment Setup In this section, we first describe the data, followed by introducing the baselines. We report the parameters applied in various algorithms and describe the evaluation metrics. Dataset We download the reports of companies of the U.S. stock markets from 2006 to 2015 from the U.S. Securities and Exchange Commission (SEC) website2. We remove HTML tags and extract the text parts. We extract the Risk Factors section using term matching heuristics. Finally, the texts are stemmed using the Porter stemmer. We calculate the volatility values (Eq 1) and the volatility of the GARCH model based on the stock prices, collected from the Yahoo website. We filter the volatility values greater/smaller than the mean plus/minus three times the standard deviation of all the volatility values3. Baselines GARCH: although the GARCH model is of market factual information, we use 2https://www.sec.gov 3The complete dataset is available in http://ifs. tuwien.ac.at/˜admire/financialvolatility it as a baseline to compare the effectiveness of text-based methods with mainstream approaches. Market: uses all the market features. For both the GARCH and Market baselines, we use an SVM learner with RBF kernel. Wang et al. (2013): they use the Lex keyword set with TC weighting scheme and the SVM method. They combine the textual features with current volatility using the early fusion method. Tsai et al. (2014): similar to Wang et al. (2013), while they use the LexExt keyword set. Evaluation Metrics As a common metric in volatility prediction, we use the r2 metric (square of the correlation coefficient) for evaluation: r2 =   Pn i=1( ˆyi −¯ˆy)(yi −¯y) qPn i=1( ˆyi −¯ˆy)2pPn i=1(yi −¯y)2   2 (7) where ˆyi is the predicted value, yi denotes the labels and ¯y, their mean. The r2 metric indicates the proportion of variance in the labels explained by the prediction. The measure is close to 1 when the predicted values can explain a large proportion of the variability in the labels and 0 when it fails to explain the labels’ variabilities. An alternative metric, used in previous studies (Wang et al., 2013; Tsai and Wang, 2014; Kogan et al., 2009) is Mean Squared Error MSE = P i( ˆyi −yi)2/n. However, especially when comparing models, applied on different test sets (e.g. performance of first quartile with second quartile), r2 has better interpretability since it is independent of the scale of y. We use r2 in all the experiments while the MSE measure is reported only when the models are evaluated on the same test set. 
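As a reference point for the evaluation, the r² of Eq. (7) and the MSE can be computed as in the short sketch below. Note that this r² is the squared Pearson correlation between predictions and labels, which is not in general identical to the coefficient of determination reported by some libraries; the variable names are illustrative.

```python
import numpy as np

def r_squared(y_pred, y_true):
    """Square of the Pearson correlation between predictions and labels (Eq. 7)."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    dp, dt = y_pred - y_pred.mean(), y_true - y_true.mean()
    return (np.sum(dp * dt) / (np.sqrt(np.sum(dp ** 2)) * np.sqrt(np.sum(dt ** 2)))) ** 2

def mse(y_pred, y_true):
    """Mean squared error, reported only when models share the same test set."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return np.mean((y_pred - y_true) ** 2)

# Example
print(r_squared([0.1, 0.4, 0.35], [0.12, 0.38, 0.30]),
      mse([0.1, 0.4, 0.35], [0.12, 0.38, 0.30]))
```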
6 Experiments and Results In this section, first we analyse the contents of the reports, followed by studying our sentiment analysis methods for volatility prediction. Finally, we investigate the effect of sentiment analysis of the reports in different industry sectors. 6.1 Content Analysis of 10-K Reports Let us start our experiment with observing changes in the feature vectors of the reports over the years. To compare them, we use the state-ofthe-art sentiment analysis method, introduced by Tsai and Wang (2014). We first represent the feature vector of each year by calculating the centroid 1716 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 0.8 0.4 0.0 0.4 0.8 (a) 2006 2007 2008 2009 2010 2011 2012 2013 2014 0.0 0.1 0.2 0.3 0.4 0.5 r2 (b) Figure 1: (a) Cosine similarity between the centroid vectors of the years. (b) Volatility prediction performance when using reports from the specified year to 2015 (element-wise mean) of the feature vectors of all reports published that year and then calculate the Cosine similarity of each pair of centroid vectors, for the years 2006–2015. Figure 1a shows the similarity heat-map for each pair of the years. We observe a high similarity between three ranges of years: 2006–2008, 2009–2011, and 2012–2015. These considerable differences between the centroid reports in years across these three groups hints at probable issues when using the data of the older years for the more recent ones. To validate this, we apply 5-fold cross validation, first on all the data (2006–2015), and then on smaller sets by dropping the oldest year i.e. the next subsets use the reports 2007–2015, 2008– 2015 and so forth. The results of the r2 measure are shown in Figure 1b. We observe that by dropping the oldest years one by one (from left to right in the figure), the performance starts improving. We argue that this improvement is due to the reduction of noise in data, noise caused by conceptual drifts in the reports as also mentioned by Dyer et al. (2016). In fact, although in machine learning in general using more data results in better generalization of the model and therefore better prediction, the reports of the older years introduce noise. As shown, the most coherent and largest data consists of the subset of the reports published between 2012 to 2015. This subset is also the most recent cluster and presumably more similar to the future reports. Therefore, in the following, we only use this subset, which consists of 3892 reports, belonging to 1323 companies. Table 1: Performance of sentiment analysis methods for the first year. Component Method Text Text+Market (r2) (MSE) (r2) (MSE) Weighting Schema (+Stacking) \ BM25 0.439 0.132 0.527 0.111 BM25 0.433 0.136 0.523 0.114 d T C 0.427 0.136 0.517 0.115 T C 0.425 0.137 0.521 0.114 \ T F IDF 0.301 0.166 0.502 0.118 T F IDF 0.264 0.189 0.497 0.119 d T F 0.218 0.190 0.495 0.120 T F 0.233 0.200 0.495 0.120 Feature Fusion (+ \ BM25) Stacking 0.527 0.111 MKL 0.488 0.126 Early Fusion 0.473 0.125 Table 2: Performance of the methods using 5-fold cross validation. Method (r2) (MSE) GARCH 0.280 0.170 Text Wang (2013) 0.345 0.154 Tsai (2014) 0.395 0.142 Our method 0.439 0.132 Market 0.485 0.122 Text+Market Wang (2013) 0.499 0.118 Tsai (2014) 0.484 0.122 Our method 0.527 0.111 6.2 Volatility Prediction Given the dataset of the 2012–2015 reports, we try all combinations of different term weighting schemes using the LexExt keyword set. 
All weighting schemes are then combined with the market features with the introduced fusion methods. The prediction is done with 5-fold cross validation. The averages of the results of the first four quartiles (first year) are reported in Table 1. To make showing the results tractable, we use the best fusion (stacking) for the weighting schemes and the best scheme ( \ BM25) for fusions. Regarding the weighting schemes, \ BM25, BM25, and d TC show the best results. In general, the extended schemes (with hat) improve upon their normal forms. For the feature fusion methods, stacking outperforms the other approaches in both evaluation measures. MKL however has better performance than early fusion while it has the highest computational complexity among the methods. Based on these results, as our best performing approach in the remainder of the paper, we use \ BM25 (with LexExt set), reduced to 400 dimensions and stacking as the fusion method. Table 2 summarizes the results of our best per1717 Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 0.0 0.1 0.2 0.3 0.4 0.5 0.6 r2 Text+Market Text Market GARCH (a) CV 2013 2014 2015 0.0 0.1 0.2 0.3 0.4 0.5 0.6 r2 Text Text+Market (b) ene ind fin tech misc n-dur capt dur serv pub hlth 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 r2 Text Text+Market (c) Figure 2: (a) Performance of our approach on 8 quartiles using the Text and Text+Market feature sets. The dashed lines show the market-based baselines. (b) Performance of volatility prediction of each year given the past data. The hashed areas show corresponding baselines. (c) Performance per sector. Abbreviations are defined in Section 4.2 forming method compared with previously existing methods. Our method outperforms all state-ofthe-art methods both when using textual features only as well as a combination of textual and market features. Let us now take a closer look on the changes in the performance of the prediction in time. The results of 5-fold cross validation for both tasks on the dataset of the reports, published between 2012–2015 are shown in Figure 2a. The X-axes show eight quartiles after the publication date of the report. For comparison, the GARCH and only market features are depicted with dashed lines. As shown, the performance of the GARCH method as well as that using only market features (Market) decrease faster in the later quartiles since the historical prices used for prediction become less relevant as time goes by. Using only text features (Text), we see a roughly similar performance between the first four quartiles (first year), while the performance, in general, slightly decreases in the second year. By combining the textual and market features (Text+Market), we see a consistent improvement in comparison to each of them alone. In comparison to using only market features, the combination of the features shows more stable results in the later quartiles. These results support the informativeness of the 10-K reports to more effectively foreseen volatility in long-term windows. While the above experiments are based on cross-validation, for the sake of completeness it is noteworthy to consider the scenarios of realworld applications where the future prediction is based on past data. We therefore design three experiments by considering the reports published in 2013, 2014, and 2015 as test set and the reports published before each year as training set (only 2012, 2012–2013, and 2012–2014 respectively). The results of predicting the reports of each year together with the cross validation scenario (CV) are shown in Figure 2b. 
While the performance becomes slightly worse in the target years 2013 and 2015, in general the combination of textual and market features can explain approximately half of volatility in the financial system. 6.3 Sectors Corporations in the same sector share not only similar products or services but also risks and instability factors. Considering the sentiment of the financial system as a homogeneous body may neglect the specific factors of each sector. We therefore set out to investigate the existence and nature of these differences. We start by observing the prediction performance on different sectors: We use our method from the previous section, but split the test set across sectors and plot the results in Figure 2c. The hashed areas indicate the GARCH and Market baselines for the Text and Text+Market feature sets, respectively. We observe considerable differences between the performance of the sectors, especially when using only sentiment analysis methods (i.e. only text features). Given these differences and also the probable similarities between the risk factors of the reports in the same sector, a question immediately arises: can training different models for different sectors improve the performance of prediction? To answer it, for each sector, we train a model using only the subset of the reports in that sec1718 ene ind fin tech misc n-dur capt dur serv pub hlth 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 r2 Sector-agnostic Sector-specific General model (a) Text ene ind fin tech misc n-dur capt dur serv pub hlth 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 r2 Sector-agnostic Sector-specific General model (b) Text+Market Figure 3: Results when retraining on sector-specific subsets versus the general model and versus subsets of the same size but sector-agnostic. The hashed area in (a) indicates the GARCH and in (b) the Market baseline. Table 3: Number of reports per sectors ene ind hlth fin tech pub 187 160 305 847 408 217 n-dur dur capt serv misc 151 115 255 639 153 tor and use 5-fold validation to observe performance. We refer to these models as sector-specific in contrast to the general model, trained on all the data. Figures 3a and 3b compare their results: we can see that the sector-specific bars are lower than the general model ones. This is to some extent surprising, as one would expect that domainspecific training would improve the performance of sentiment analysis in text. However, we need to consider the size of the training set. By training on each sector we have reduced the size of our training sets to those reported in Table 3. To verify the effect of the size of training data, we train a sector-agnostic model for each sector. Each sector-agnostic model is trained by random sampling of a training set of the same size as the set available for its sector from all the reports, but evaluated–similar to sector-specific models–on the test set of the sector. Figures 3a and 3b also plot the results of the sector-agnostic models. The large performance differences between sector-agnostic and -specific show the existence of particular risk factors in each sector and their importance. Results also confirm the hypothesis that the data for training in each sector is simply too small, and as additional data is accumulated, we can further improve on the results by training on different sectors independently. We continue by examining some examples of essential terms in sectors. 
To address this, we have to train a linear regression method on all the reports of each sector, without using any dimensionality reduction. Linear regression without dimensionality reduction has the benefit of interpretability: the coefficient of each feature (i.e. term in the lexicon) can be seen as its importance with regards to volatility prediction. After training, we observe that some keywords e.g. crisis, or delist constantly have high coefficient values in the sector-specific as well as general model. However, some keywords are particularly weighted high in specificsector models. For instance, the keyword fire has a high coefficient in the energy sector, but very low in the others. The reason is due to the problem of ambiguity i.e. in the energy sector, fire is widely used to refer to explosion e.g. ‘fire and explosion hazards’ while in the lexicon, it is stemmed from firing and fired: the act of dismissing from a job. This later sense of word is however weighted as a low risk-sensitive keyword in the other sectors. Such an ambiguity can indeed be mitigated by sectorspecific models since the variety of the words’ senses are more restricted inside each sector. Another example is an interesting observation on the word beneficial. The word is introduced as a positive sentiment in the lexicon while it gains highly negative sentiments in some sectors (health care, and basic industries). Investigating in the reports, we observe the broad use of the expression ‘beneficial owner’ which is normally followed by riskfull sentences since the beneficial owners can potentially influence shareholders’ decision power. 1719 7 Conclusion In this work, we studied the sentiment of recent 10-K annual disclosures of companies in stock markets for forecasting volatility. Our bag-ofwords sentiment analysis approach benefits from state-of-the-art models in information retrieval which use word embeddings to extend the weight of the terms to the similar terms in the document. Additionally, we explored fusion methods to combine the text features with factual market features, achieved from historical prices i.e. GARCH prediction model, and current volatility. In both cases, our approach outperforms state-ofthe-art volatility prediction methods with 10-K reports and demonstrates the effectiveness of sentiment analysis in long-term volatility forecasting. In addition, we studied the characteristics of each individual sector with regard to risk-sensitive terms. Our analysis shows that reports in same sectors considerably share particular risk and instability factors. However, despite expectations, training different models on different sectors does not improve performance compared to the general model. We traced this to the size of the available data in each sector, and show that there are still benefits in considering sectors, which could be further explored in the future as more data becomes available. 8 Acknowledgement This paper follows work produced during the Young Scientists Summer Program (YSSP) 2016 at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria. This work is funded by: Self-Optimizer (FFG 852624) in the EUROSTARS programme, funded by EUREKA, the BMWFW and the European Union, ADMIRE (P 25905-N23) by FWF, and the Austrian Ministry for Science, Research and Economy. Thanks to Joni Sayeler and Linus Wretblad for their contributions in the SelfOptimizer project. References Tim Bollerslev. 1986. Generalized autoregressive conditional heteroskedasticity. 
Journal of econometrics 31(3):307–327. John L Campbell, Hsinchun Chen, Dan S Dhaliwal, Hsin-min Lu, and Logan B Steele. 2014. The information content of mandatory risk factor disclosures in corporate filings. Review of Accounting Studies 19(1):396–455. Charlotte Christiansen, Maik Schmeling, and Andreas Schrimpf. 2012. A comprehensive look at financial volatility prediction by economic variables. Journal of Applied Econometrics 27(6):956–977. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI’15). pages 2327–2333. Harris Drucker, Christopher JC Burges, Linda Kaufman, Alex Smola, Vladimir Vapnik, et al. 1997. Support vector regression machines. Advances in neural information processing systems 9:155–161. Travis Dyer, Mark H Lang, and Lorien Stice-Lawrence. 2016. The ever-expanding 10-k: Why are 10-ks getting so much longer (and does it matter)? Available at SSRN 2741682 . Robert F Engle. 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. Econometrica: Journal of the Econometric Society pages 987–1007. Mehmet G¨onen and Ethem Alpaydın. 2011. Multiple kernel learning algorithms. Journal of Machine Learning Research 12(Jul):2211–2268. Siavash Kazemian, Shunan Zhao, and Gerald Penn. 2014. Evaluating sentiment analysis evaluation: A case study in securities trading. Proceedings of the Conference of the Association for Computational Linguistics (ACL) page 119. Shimon Kogan, Dimitry Levin, Bryan R Routledge, Jacob S Sagi, and Noah A Smith. 2009. Predicting risk from financial reports with regression. In Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics. pages 272–280. Gert RG Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I Jordan. 2004. Learning the kernel matrix with semidefinite programming. Journal of Machine learning research 5(Jan):27–72. Feng Li. 2010. The information content of forwardlooking statements in corporate filings–a na¨ıve bayesian machine learning approach. Journal of Accounting Research 48(5):1049–1102. Hongquan Li and Yongmiao Hong. 2011. Financial volatility forecasting with range-based autoregressive volatility model. Finance Research Letters 8(2):69–76. Shouwei Liu and Yiu Kuen Tse. 2013. Estimation of monthly volatility: An empirical comparison of realized volatility, garch and acd-icv methods. Research Collection School Of Economics . 1720 Tim Loughran and Bill McDonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. The Journal of Finance 66(1):35–65. Ronny Luss and Alexandre d’Aspremont. 2015. Predicting abnormal returns from news using text classification. Quantitative Finance 15(6):999–1012. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Rodrigo Moraes, Jo˜aO Francisco Valiati, and Wilson P Gavi˜aO Neto. 2013. Document-level sentiment classification: An empirical comparison between svm and ann. Expert Systems with Applications 40(2):621–633. Thien Hai Nguyen and Kiyoaki Shirai. 2015. Topic modeling based sentiment analysis on social media for stock market prediction. In ACL. William Stafford Noble et al. 2004. Support vector machine applications in computational biology. Kernel methods in computational biology pages 71–92. 
Clemens Nopp and Allan Hanbury. 2015. Detecting risks in the banking system by sentiment analysis. Proceedings of the Conference of Empirical Methods in Natural Language Processing (EMNLP) pages 591–600. Navid Rekabsaz, Mihai Lupu, and Allan Hanbury. 2016a. Uncertainty in neural network word embedding: Exploration of threshold for similarity. arXiv preprint arXiv:1606.06086 . Navid Rekabsaz, Mihai Lupu, Allan Hanbury, and Guido Zuccon. 2016b. Generalizing translation models in the probabilistic relevance framework. Proceedings of ACM International Conference on Information and Knowledge Management (CIKM) . Navid Rekabsaz, Mihai Lupu, Allan Hanbury, and Guido Zuccon. 2017. Exploration of a threshold for similarity based on uncertainty in word embedding. In European Conference on IR Research (ECIR). Martin Straˇzar and Tomaˇz Curk. 2016. Learning the kernel matrix via predictive low-rank approximations. arXiv preprint arXiv:1601.04366 . Ming-Feng Tsai and Chuan-Ju Wang. 2014. Financial keyword expansion via continuous word vector representations. In Proceedings of the Conference of Empirical Methods in Natural Language Processing (EMNLP). pages 1453–1458. Chuan-Ju Wang, Ming-Feng Tsai, Tse Liu, and ChinTing Chang. 2013. Financial sentiment analysis for risk prediction. In Proceedings of the Joint Conference on Natural Language Processing (IJCNLP). pages 802–808. William Yang Wang and Zhenhao Hua. 2014. A semiparametric gaussian copula regression model for predicting financial risks from earnings calls. In ACL. David H Wolpert. 1992. Stacked generalization. Neural networks 5(2):241–259. Boyi Xie, Rebecca J Passonneau, and Leon Wu. 2013. Semantic Frames to Predict Stock Price Movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. 1721
2017
157
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1722–1731 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1158 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1722–1731 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1158 CANE: Context-Aware Network Embedding for Relation Modeling Cunchao Tu1,2∗, Han Liu3∗, Zhiyuan Liu1,2†, Maosong Sun1,2 1Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, China 2Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, China 3Northeastern University, China Abstract Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE. 1 Introduction Network embedding (NE), i.e., network representation learning (NRL), aims to map vertices of a network into a low-dimensional space according to their structural roles in the network. NE provides an efficient and effective way to represent ∗Indicates equal contribution †Corresponding Author: Z. Liu ([email protected]) and manage large-scale networks, alleviating the computation and sparsity issues of conventional symbol-based representations. Hence, NE is attracting many research interests in recent years (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016), and achieves promising performance on many network analysis tasks including link prediction, vertex classification, and community detection. I am studying NLP problems, including syntactic parsing, machine translation and so on. My research focuses on typical NLP tasks, including word segmentation, tagging and syntactic parsing. I am a NLP researcher in machine translation, especially using deep learning models to improve machine translation. Figure 1: Example of a text-based information network. (Red, blue and green fonts represent concerns of the left user, right user and both respectively.) In real-world social networks, it is intuitive that one vertex may demonstrate various aspects when interacting with different neighbor vertices. For example, a researcher usually collaborates with various partners on diverse research topics (as illustrated in Fig. 
1), a social-media user contacts with various friends sharing distinct interests, and a web page links to multiple pages for different purposes. However, most existing NE methods only arrange one single embedding vector to each vertex, and give rise to the following two invertible issues: (1) These methods cannot flexibly cope with the aspect transition of a vertex when interacting with different neighbors. (2) In these models, a vertex tends to force the embeddings of its 1722 neighbors close to each other, which may be not the case all the time. For example, the left user and right user in Fig. 1, share less common interests, but are learned to be close to each other since they both link to the middle person. This will accordingly make vertex embeddings indiscriminative. To address these issues, we aim to propose a Context-Aware Network Embedding (CANE) framework for modeling relationships between vertices precisely. More specifically, we present CANE on information networks, where each vertex also contains rich external information such as text, labels or other meta-data, and the significance of context is more critical for NE in this scenario. Without loss of generality, we implement CANE on text-based information networks in this paper, which can easily extend to other types of information networks. In conventional NE models, each vertex is represented as a static embedding vector, denoted as context-free embedding. On the contrary, CANE assigns dynamic embeddings to a vertex according to different neighbors it interacts with, named as context-aware embedding. Take a vertex u and its neighbor vertex v for example. The contextfree embedding of u remains unchanged when interacting with different neighbors. On the contrary, the context-aware embedding of u is dynamic when confronting different neighbors. When u interacting with v, their context embeddings concerning each other are derived from their text information, Su and Sv respectively. For each vertex, we can easily use neural models, such as convolutional neural networks (Blunsom et al., 2014; Johnson and Zhang, 2014; Kim, 2014) and recurrent neural networks (Kiros et al., 2015; Tai et al., 2015), to build context-free text-based embedding. In order to realize context-aware textbased embeddings, we introduce the selective attention scheme and build mutual attention between u and v into these neural models. The mutual attention is expected to guide neural models to emphasize those words that are focused by its neighbor vertices and eventually obtain contextaware embeddings. Both context-free embeddings and contextaware embeddings of each vertex can be efficiently learned together via concatenation using existing NE methods such as DeepWalk (Perozzi et al., 2014), LINE (Tang et al., 2015) and node2vec (Grover and Leskovec, 2016). We conduct experiments on three real-world datasets of different areas. Experimental results on link prediction reveal the effectiveness of our framework as compared to other state-of-the-art methods. The results suggest that context-aware embeddings are critical for network analysis, in particular for those tasks concerning about complicated interactions between vertices such as link prediction. We also explore the performance of our framework via vertex classification and case studies, which again confirms the flexibility and superiority of our models. 2 Related Work With the rapid growth of large-scale social networks, network embedding, i.e. 
network representation learning has been proposed as a critical technique for network analysis tasks. In recent years, there have been a large number of NE models proposed to learn efficient vertex embeddings (Tang and Liu, 2009; Cao et al., 2015; Wang et al., 2016; Tu et al., 2016a). For example, DeepWalk (Perozzi et al., 2014) performs random walks over networks and introduces an efficient word representation learning model, SkipGram (Mikolov et al., 2013a), to learn network embeddings. LINE (Tang et al., 2015) optimizes the joint and conditional probabilities of edges in large-scale networks to learn vertex representations. Node2vec (Grover and Leskovec, 2016) modifies the random walk strategy in DeepWalk into biased random walks to explore the network structure more efficiently. Nevertheless, most of these NE models only encode the structural information into vertex embeddings, without considering heterogeneous information accompanied with vertices in real-world social networks. To address this issue, researchers make great efforts to incorporate heterogeneous information into conventional NE models. For instance, Yang et al. (2015) present text-associated DeepWalk (TADW) to improve matrix factorization based DeepWalk with text information. Tu et al. (2016b) propose max-margin DeepWalk (MMDW) to learn discriminative network representations by utilizing labeling information of vertices. Chen et al. (2016) introduce groupenhanced network embedding (GENE) to integrate existing group information in NE. Sun et al. (2016) regard text content as a special kind 1723 of vertices, and propose context-enhanced network embedding (CENE) through leveraging both structural and textural information to learn network embeddings. To the best of our knowledge, all existing NE models focus on learning context-free embeddings, but ignore the diverse roles when a vertex interacts with others. In contrast, we assume that a vertex has different embeddings according to which vertex it interacts with, and propose CANE to learn context-aware vertex embeddings. 3 Problem Formulation We first give basic notations and definitions in this work. Suppose there is an information network G = (V, E, T), where V is the set of vertices, E ⊆V ×V are edges between vertices, and T denotes the text information of vertices. Each edge eu,v ∈E represents the relationship between two vertices (u, v), with an associated weight wu,v. Here, the text information of a specific vertex v ∈V is represented as a word sequence Sv = (w1, w2, . . . , wnv), where nv = |Sv|. NRL aims to learn a low-dimensional embedding v ∈Rd for each vertex v ∈V according to its network structure and associated information, e.g. text and labels. Note that, d ≪|V | is the dimension of representation space. Definition 1. Context-free Embeddings: Conventional NRL models learn context-free embedding for each vertex. It means the embedding of a vertex is fixed and won’t change with respect to its context information (i.e., another vertex it interacts with). Definition 2. Context-aware Embeddings: Different from existing NRL models that learn context-free embeddings, CANE learns various embeddings for a vertex according to its different contexts. Specifically, for an edge eu,v, CANE learns context-aware embeddings v(u) and u(v). 4 The Method 4.1 Overall Framework To take full use of both network structure and associated text information, we propose two types of embeddings for a vertex v, i.e., structurebased embedding vs and text-based embedding vt. 
Structure-based embedding can capture the information in the network structure, while textbased embedding can capture the textual meanings lying in the associated text information. With these embeddings, we can simply concatenate them and obtain the vertex embeddings as v = vs ⊕vt, where ⊕indicates the concatenation operation. Note that, the text-based embedding vt can be either context-free or context-aware, which will be introduced detailedly in section 4.4 and 4.5 respectively. When vt is context-aware, the overall vertex embeddings v will be context-aware as well. With above definitions, CANE aims to maximize the overall objective of edges as follows: L = X e∈E L(e). (1) Here, the objective of each edge L(e) consists of two parts as follows: L(e) = Ls(e) + Lt(e), (2) where Ls(e) denotes the structure-based objective and Lt(e) represents the text-based objective. In the following part, we give the detailed introduction to the two objectives respectively. 4.2 Structure-based Objective Without loss of generality, we assume the network is directed, as an undirected edge can be considered as two directed edges with opposite directions and equal weights. Thus, the structure-based objective aims to measure the log-likelihood of a directed edge using the structure-based embeddings as Ls(e) = wu,v log p(vs|us). (3) Following LINE (Tang et al., 2015), we define the conditional probability of v generated by u in Eq. (3) as p(vs|us) = exp(us · vs) P z∈V exp(us · zs). (4) 4.3 Text-based Objective Vertices in real-world social networks usually accompany with associated text information. Therefore, we propose the text-based objective to take advantage of these text information, as well as learn text-based embeddings for vertices. The text-based objective Lt(e) can be defined with various measurements. To be compatible with Ls(e), we define Lt(e) as follows: Lt(e) = α · Ltt(e) + β · Lts(e) + γ · Lst(e), (5) 1724 where α, β and γ control the weights of various parts, and Ltt(e) = wu,v log p(vt|ut), Lts(e) = wu,v log p(vt|us), Lst(e) = wu,v log p(vs|ut). (6) The conditional probabilities in Eq. (6) map the two types of vertex embeddings into the same representation space, but do not enforce them to be identical for the consideration of their own characteristics. Similarly, we employ softmax function for calculating the probabilities, as in Eq. (4). The structure-based embeddings are regarded as parameters, the same as in conventional NE models. But for text-based embeddings, we intend to obtain them from associated text information of vertices. Besides, the text-based embeddings can be obtained either in context-free ways or contextaware ones. In the following sections, we will give detailed introduction respectively. 4.4 Context-Free Text Embedding There has been a variety of neural models to obtain text embeddings from a word sequence, such as convolutional neural networks (CNN) (Blunsom et al., 2014; Johnson and Zhang, 2014; Kim, 2014) and recurrent neural networks (RNN) (Kiros et al., 2015; Tai et al., 2015). In this work, we investigate different neural networks for text modeling, including CNN, Bidirectional RNN (Schuster and Paliwal, 1997) and GRU (Cho et al., 2014), and employ the best performed CNN, which can capture the local semantic dependency among words. Taking the word sequence of a vertex as input, CNN obtains the text-based embedding through three layers, i.e. looking-up, convolution and pooling. Looking-up. Given a word sequence S = (w1, w2, . . . 
, wn), the looking-up layer transforms each word wi ∈S into its corresponding word embedding wi ∈Rd′ and obtains embedding sequence as S = (w1, w2, . . . , wn). Here, d′ indicates the dimension of word embeddings. Convolution. After looking-up, the convolution layer extracts local features of input embedding sequence S. To be specific, it performs convolution operation over a sliding window of length l using a convolution matrix C ∈Rd×(l×d′) as follows: xi = C · Si:i+l−1 + b, (7) where Si:i+l−1 denotes the concatenation of word embeddings within the i-th window and b is the bias vector. Note that, we add zero padding vectors (Hu et al., 2014) at the edge of the sentence. Max-pooling. To obtain the text embedding vt, we operate max-pooling and non-linear transformation over {xi 0, . . . , xi n} as follows: ri = tanh(max(xi 0, . . . , xi n)), (8) At last, we encode the text information of a vertex with CNN and obtain its text-based embedding vt = [r1, . . . , rd]T . As vt is irrelevant to the other vertices it interacts with, we name it as contextfree text embedding. 4.5 Context-Aware Text Embedding Text Description Text Description Convolutional Unit Convolutional Unit u v P Q A tanh(PTAQ) Row-pooling + softmax Column-pooling + softmax ap aq ut (v)=P·ap vt (u)=Q·aq F Edge Text Embedding Figure 2: An illustration of context-aware text embedding. As stated before, we assume that a specific vertex plays different roles when interacting with others vertices. In other words, each vertex should have its own points of focus about a specific vertex, which leads to its context-aware text embeddings. To achieve this, we employ mutual attention to obtain context-aware text embedding. It enables the pooling layer in CNN to be aware of the vertex pair in an edge, in a way that text information from a vertex can directly affect the text embedding of the other vertex, and vice versa. 1725 In Fig. 2, we give an illustration of the generating process of context-aware text embedding. Given an edge eu,v with two corresponding text sequences Su and Sv, we can get the matrices P ∈Rd×m and Q ∈Rd×n through convolution layer. Here, m and n represent the lengths of Su and Sv respectively. By introducing an attentive matrix A ∈Rd×d, we compute the correlation matrix F ∈Rm×n as follows: F = tanh(PT AQ). (9) Note that, each element Fi,j in F represents the pair-wise correlation score between two hidden vectors, i.e., Pi and Qj. After that, we conduct pooling operations along rows and columns of F to generate the importance vectors, named as row-pooling and column pooling respectively. According to our experiments, mean-pooling performs better than max-pooling. Thus, we employ mean-pooling operation as follows: gp i = mean(Fi,1, . . . , Fi,n), gq i = mean(F1,i, . . . , Fm,i). (10) The importance vectors of P and Q are obtained as gp = [gp 1, . . . , gp m]T and gq = [gq 1, . . . , gq n]T . Next, we employ softmax function to transform importance vectors gp and gq to attention vectors ap and aq. For instance, the i-th element of ap is formalized as follows: ap i = exp(gp i ) P j∈[1,m] exp(gp j ). (11) At last, the context-aware text embeddings of u and v are computed as ut (v) = Pap, vt (u) = Qaq. (12) Now, given an edge (u, v), we can obtain the context-aware embeddings of vertices with their structure embeddings and context-aware text embeddings as u(v) = us⊕ut (v) and v(u) = vs⊕vt (u). 4.6 Optimization of CANE According to Eq. (3) and Eq. 
(6), CANE aims to maximize several conditional probabilities between u ∈{us, ut (v)} and v ∈{vs, vt (u)}. It is intuitive that optimizing the conditional probability using softmax function is computationally expensive. Thus, we employ negative sampling (Mikolov et al., 2013b) and transform the objective into the following form: log σ(uT ·v)+ k X i=1 Ez∼P(v)[log σ(−uT ·z)], (13) where k is the number of negative samples and σ represents the sigmoid function. P(v) ∝dv3/4 denotes the distribution of vertices, where dv is the out-degree of v. Afterward, we employ Adam (Kingma and Ba, 2015) to optimize the transformed objective. Note that, CANE is exactly capable of zero-shot scenarios, by generating text embeddings of new vertices with well-trained CNN. 5 Experiments To investigate the effectiveness of CANE on modeling relationships between vertices, we conduct experiments of link prediction on several realworld datasets. Besides, we also employ vertex classification to verify whether context-aware embeddings of a vertex can compose a high-quality context-free embedding in return. 5.1 Datasets Datasets Cora HepTh Zhihu #Vertices 2, 277 1, 038 10, 000 #Edges 5, 214 1, 990 43, 894 #Labels 7 − − Table 1: Statistics of Datasets. We select three real-world network datasets as follows: Cora1 is a typical paper citation network constructed by (McCallum et al., 2000). After filtering out papers without text information, there are 2, 277 machine learning papers in this network, which are divided into 7 categories. HepTh2 (High Energy Physics Theory) is another citation network from arXiv3 released by (Leskovec et al., 2005). We filter out papers without abstract information and retain 1, 038 papers at last. 1https://people.cs.umass.edu/∼mccallum/data.html 2https://snap.stanford.edu/data/cit-HepTh.html 3https://arxiv.org/ 1726 %Training edges 15% 25% 35% 45% 55% 65% 75% 85% 95% MMB 54.7 57.1 59.5 61.9 64.9 67.8 71.1 72.6 75.9 DeepWalk 56.0 63.0 70.2 75.5 80.1 85.2 85.3 87.8 90.3 LINE 55.0 58.6 66.4 73.0 77.6 82.8 85.6 88.4 89.3 node2vec 55.9 62.4 66.1 75.0 78.7 81.6 85.9 87.3 88.2 Naive Combination 72.7 82.0 84.9 87.0 88.7 91.9 92.4 93.9 94.0 TADW 86.6 88.2 90.2 90.8 90.0 93.0 91.0 93.4 92.7 CENE 72.1 86.5 84.6 88.1 89.4 89.2 93.9 95.0 95.9 CANE (text only) 78.0 80.5 83.9 86.3 89.3 91.4 91.8 91.4 93.3 CANE (w/o attention) 85.8 90.5 91.6 93.2 93.9 94.6 95.4 95.1 95.5 CANE 86.8 91.5 92.2 93.9 94.6 94.9 95.6 96.6 97.7 Table 2: AUC values on Cora. (α = 1.0, β = 0.3, γ = 0.3) Zhihu4 is the largest online Q&A website in China. Users follow each other and answer questions on this site. We randomly crawl 10, 000 active users from Zhihu, and take the descriptions of their concerned topics as text information. The detailed statistics are listed in Table 1. 5.2 Baselines We employ the following methods as baselines: Structure-only: MMB (Airoldi et al., 2008) (Mixed Membership Stochastic Blockmodel) is a conventional graphical model of relational data. It allows each vertex to randomly select a different ”topic” when forming an edge. DeepWalk (Perozzi et al., 2014) performs random walks over networks and employ Skip-Gram model (Mikolov et al., 2013a) to learn vertex embeddings. LINE (Tang et al., 2015) learns vertex embeddings in large-scale networks using first-order and second-order proximities. Node2vec (Grover and Leskovec, 2016) proposes a biased random walk algorithm based on DeepWalk to explore neighborhood architecture more efficiently. 
Structure and Text: Naive Combination: We simply concatenate the best-performed structure-based embeddings with CNN based embeddings to represent the vertices. TADW (Yang et al., 2015) employs matrix factorization to incorporate text features of vertices into network embeddings. CENE (Sun et al., 2016) leverages both structure and textural information by regarding text content as a special kind of vertices, and optimizes the probabilities of heterogeneous links. 4https://www.zhihu.com/ 5.3 Evaluation Metrics and Experiment Settings For link prediction, we adopt a standard evaluation metric AUC (Hanley and McNeil, 1982), which represents the probability that vertices in a random unobserved link are more similar than those in a random nonexistent link. For vertex classification, we employ L2regularized logistic regression (L2R-LR) (Fan et al., 2008) to train classifiers, and evaluate the classification accuracies of various methods. To be fair, we set the embedding dimension to 200 for all methods. In LINE, we set the number of negative samples to 5; we learn the 100 dimensional first-order and second-order embeddings respectively, and concatenate them to form the 200 dimensional embeddings. In node2vec, we employ grid search and select the best-performed hyper-parameters for training. We also apply grid search to set the hyper-parameters α, β and γ in CANE. Besides, we set the number of negative samples k to 1 in CANE to speed up the training process. To demonstrate the effectiveness of considering attention mechanism and two types of objectives in Eqs. (3) and (6), we design three versions of CANE for evaluation, i.e., CANE with text only, CANE without attention and CANE. 5.4 Link Prediction As shown in Table 2, Table 3 and Table 4, we evaluate the AUC values while removing different ratios of edges on Cora, HepTh and Zhihu respectively. Note that, when we only keep 5% edges for training, most vertices are isolated, which results in the poor and meaningless performance of all the methods. Thus, we omit the results under this training ratio. From these tables, we have the following observations: 1727 %Training edges 15% 25% 35% 45% 55% 65% 75% 85% 95% MMB 54.6 57.9 57.3 61.6 66.2 68.4 73.6 76.0 80.3 DeepWalk 55.2 66.0 70.0 75.7 81.3 83.3 87.6 88.9 88.0 LINE 53.7 60.4 66.5 73.9 78.5 83.8 87.5 87.7 87.6 node2vec 57.1 63.6 69.9 76.2 84.3 87.3 88.4 89.2 89.2 Naive Combination 78.7 82.1 84.7 88.7 88.7 91.8 92.1 92.0 92.7 TADW 87.0 89.5 91.8 90.8 91.1 92.6 93.5 91.9 91.7 CENE 86.2 84.6 89.8 91.2 92.3 91.8 93.2 92.9 93.2 CANE (text only) 83.8 85.2 87.3 88.9 91.1 91.2 91.8 93.1 93.5 CANE (w/o attention) 84.5 89.3 89.2 91.6 91.1 91.8 92.3 92.5 93.6 CANE 90.0 91.2 92.0 93.0 94.2 94.6 95.4 95.7 96.3 Table 3: AUC values on HepTh. (α = 0.7, β = 0.2, γ = 0.2) %Training edges 15% 25% 35% 45% 55% 65% 75% 85% 95% MMB 51.0 51.5 53.7 58.6 61.6 66.1 68.8 68.9 72.4 DeepWalk 56.6 58.1 60.1 60.0 61.8 61.9 63.3 63.7 67.8 LINE 52.3 55.9 59.9 60.9 64.3 66.0 67.7 69.3 71.1 node2vec 54.2 57.1 57.3 58.3 58.7 62.5 66.2 67.6 68.5 Naive Combination 55.1 56.7 58.9 62.6 64.4 68.7 68.9 69.0 71.5 TADW 52.3 54.2 55.6 57.3 60.8 62.4 65.2 63.8 69.0 CENE 56.2 57.4 60.3 63.0 66.3 66.0 70.2 69.8 73.8 CANE (text only) 55.6 56.9 57.3 61.6 63.6 67.0 68.5 70.4 73.5 CANE (w/o attention) 56.7 59.1 60.9 64.0 66.1 68.9 69.8 71.0 74.3 CANE 56.8 59.3 62.9 64.5 68.9 70.4 71.4 73.6 75.4 Table 4: AUC values on Zhihu. 
(α = 1.0, β = 0.3, γ = 0.3) (1) Our proposed CANE consistently achieves significant improvement comparing to all the baselines on all different datasets and different training ratios. It indicates the effectiveness of CANE when applied to link prediction task, and verifies that CANE has the capability of modeling relationships between vertices precisely. (2) What calls for special attention is that, both CENE and TADW exhibit unstable performance under various training ratios. Specifically, CENE performs poorly under small training ratios, because it reserves much more parameters (e.g., convolution kernels and word embeddings) than TADW, which need more data for training. Different from CENE, TADW performs much better under small training ratios, because DeepWalk based methods can explore the sparse network structure well through random walks even with limited edges. However, it achieves poor performance under large ones, as its simplicity and the limitation of bag-of-words assumption. On the contrary, CANE has a stable performance in various situations. It demonstrates the flexibility and robustness of CANE. (3) By introducing attention mechanism, the learnt context-aware embeddings obtain considerable improvements than the ones without attention. It verifies our assumption that a specific vertex should play different roles when interacting with other vertices, and thus benefits the relevant link prediction task. To summarize, all the above observations demonstrate that CANE can learn high-quality context-aware embeddings, which are conducive to estimating the relationship between vertices precisely. Moreover, the experimental results on link prediction task state the effectiveness and robustness of CANE. 5.5 Vertex Classification In CANE, we obtain various embeddings of a vertex according to the vertex it connects to. It’s intuitive that the obtained context-aware embeddings are naturally applicable to link prediction task. However, network analysis tasks, such as vertex classification and clustering, require a global embedding, rather than several context-aware embeddings for each vertex. To demonstrate the capability of CANE to solve these issues, we generate the global embedding of a vertex u by simply averaging all the context1728 aware embeddings as follows: u = 1 N X (u,v)|(v,u)∈E u(v), where N indicates the number of context-aware embeddings of u. 50 60 70 80 90 100 MMB DeepWalk LINE node2vec NC TADW CENE CANE(text only) CANE(w/0 attention) CANE Accuracy (× 100) Figure 3: Vertex classification results on Cora. With the generated global embeddings, we conduct 2-fold cross-validation and report the average accuracy of vertex classification on Cora. As shown in Fig. 3, we observe that: (1) CANE achieves comparable performance with state-of-the-art model CENE. It states that the learnt context-aware embeddings can transform into high-quality context-free embeddings through simple average operation, which can be further employed to other network analysis tasks. (2) With the introduction of mutual attention mechanism, CANE has an encouraging improvement than the one without attention, which is in accordance with the results of link prediction. It denotes that CANE is flexible to various network analysis tasks. 5.6 Case Study To demonstrate the significance of mutual attention on selecting meaningful features from text information, we visualize the heat maps of two vertex pairs in Fig. 4. Note that, every word in this figure accompanies with various background colors. 
The stronger the background color is, the larger the weight of this word is. The weight of each word is calculated according to the attention weights as follows. For each vertex pair, we can get the attention weight of each convolution window according to Eq. (11). To obtain the weights of words, we assign the attention weight to each word in this window, and add the attention weights of a word together as its final weight. Figure 4: Visualizations of mutual attention. The proposed attention mechanism makes the relations between vertices explicit and interpretable. We select three connected vertices in Cora for example, denoted as A, B and C. From Fig. 4, we observe that, though there exists citation relations with identical paper A, paper B and C concern about different parts of A. The attention weights over A in edge #1 are assigned to “reinforcement learning”. On the contrary, the weights in edge #2 are assigned to “machine learning’”, “supervised learning algorithms” and “complex stochastic models”. Moreover, all these key elements in A can find corresponding words in B and C. It’s intuitive that these key elements give an exact explanation of the citation relations. The discovered significant correlations between vertex pairs reflect the effectiveness of mutual attention mechanism, as well as the capability of CANE for modeling relations precisely. 6 Conclusion and Future Work In this paper, we propose the concept of ContextAware Network Embedding (CANE) for the first 1729 time, which aims to learn various context-aware embeddings for a vertex according to the neighbors it interacts with. Specifically, we implement CANE on text-based information networks with proposed mutual attention mechanism, and conduct experiments on several real-world information networks. Experimental results on link prediction demonstrate that CANE is effective for modeling the relationship between vertices. Besides, the learnt context-aware embeddings can compose high-quality context-free embeddings. We will explore the following directions in future: (1) We have investigated the effectiveness of CANE on text-based information networks. In future, we will strive to implement CANE on a wider variety of information networks with multi-modal data, such as labels, images and so on. (2) CANE encodes latent relations between vertices into their context-aware embeddings. Furthermore, there usually exist explicit relations in social networks (e.g., families, friends and colleagues relations between social network users), which are expected to be critical to NE. Thus, we want to explore how to incorporate and predict these explicit relations between vertices in NE. Acknowledgements This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273, 61532010, 61661146007), and Tsinghua University Initiative Scientific Research Program (20151080406). References Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. 2008. Mixed membership stochastic blockmodels. JMLR 9(Sep):1981–2014. Phil Blunsom, Edward Grefenstette, and Nal Kalchbrenner. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL. Shaosheng Cao, Wei Lu, and Qiongkai Xu. 2015. Grarep: Learning graph representations with global structural information. In Proceedings of CIKM. pages 891–900. Jifan Chen, Qi Zhang, and Xuanjing Huang. 2016. Incorporate group information to enhance network embedding. In Proceedings of CIKM. 
Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. JMLR 9:1871– 1874. Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of KDD. James A Hanley and Barbara J McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (roc) curve. Radiology 143(1):29– 36. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of NIPS. pages 2042–2050. Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058 . Yoon Kim. 2014. Convolutional neural networks for sentence classification. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of NIPS. pages 3294–3302. Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. 2005. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proceedings of KDD. pages 177–187. Andrew McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. 2000. Automating the construction of internet portals with machine learning. Information Retrieval Journal 3:127–163. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICIR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. pages 3111–3119. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of KDD. pages 701–710. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. 1730 Xiaofei Sun, Jiang Guo, Xiao Ding, and Ting Liu. 2016. A general framework for content-enhanced network representation learning. arXiv preprint arXiv:1610.02906 . Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of WWW. pages 1067–1077. Lei Tang and Huan Liu. 2009. Relational learning via latent social dimensions. In Proceedings of SIGKDD. pages 817–826. Cunchao Tu, Hao Wang, Xiangkai Zeng, Zhiyuan Liu, and Maosong Sun. 2016a. Community-enhanced network representation learning for network analysis. arXiv preprint arXiv:1611.06645 . Cunchao Tu, Weicheng Zhang, Zhiyuan Liu, and Maosong Sun. 2016b. Max-margin deepwalk: Discriminative learning of network representation. In Proceedings of IJCAI. Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural deep network embedding. In Proceedings of KDD. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y Chang. 2015. 
Network representation learning with rich text information. In Proceedings of IJCAI.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1732–1744 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1159 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1732–1744 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1159 Universal Dependencies Parsing for Colloquial Singaporean English Hongmin Wang†, Yue Zhang†, GuangYong Leonard Chan‡, Jie Yang†, Hai Leong Chieu‡ † Singapore University of Technology and Design {hongmin wang, yue zhang}@sutd.edu.sg jie [email protected] ‡ DSO National Laboratories, Singapore {cguangyo, chaileon}@dso.org.sg Abstract Singlish can be interesting to the ACL community both linguistically as a major creole based on English, and computationally for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model by integrating English syntactic knowledge into a state-ofthe-art parser trained on the Singlish treebank. Results show that English knowledge can lead to 25% relative error reduction, resulting in a parser of 84.47% accuracies. To the best of our knowledge, we are the first to use neural stacking to improve cross-lingual dependency parsing on low-resource languages. We make both our annotation and parser available for further research. 1 Introduction Languages evolve temporally and geographically, both in vocabulary as well as in syntactic structures. When major languages such as English or French are adopted in another culture as the primary language, they often mix with existing languages or dialects in that culture and evolve into a stable language called a creole. Examples of creoles include the French-based Haitian Creole, and Colloquial Singaporean English (Singlish) (MianLian and Platt, 1993), an English-based creole. While the majority of the natural language processing (NLP) research attention has been focused on the major languages, little work has been done on adapting the components to creoles. One notable body of work originated from the featured translation task of the EMNLP 2011 Workshop on Statistical Machine Translation (WMT11) to translate Haitian Creole SMS messages sent during the 2010 Haitian earthquake. This work highlights the importance of NLP tools on creoles in crisis situations for emergency relief (Hu et al., 2011; Hewavitharana et al., 2011). Singlish is one of the major languages in Singapore, with borrowed vocabulary and grammars1 from a number of languages including Malay, Tamil, and Chinese dialects such as Hokkien, Cantonese and Teochew (Leimgruber, 2009, 2011), and it has been increasingly used in written forms on web media. Fluent English speakers unfamiliar with Singlish would find the creole hard to comprehend (Harada, 2009). Correspondingly, fundamental English NLP components such as POS taggers and dependency parsers perform poorly on such Singlish texts as shown in Table 2 and 4. For example, Seah et al. (2015) adapted the Socher et al. (2013) sentiment analysis engine to the Singlish vocabulary, but failed to adapt the parser. 
Since dependency parsers are important for tasks such as information extraction (Miwa and Bansal, 2016) and discourse parsing (Li et al., 2015), this hinders the development of such downstream applications for Singlish in written forms and thus makes it crucial to build a dependency parser that can perform well natively on Singlish. To address this issue, we start with investigating the linguistic characteristics of Singlish and specifically the causes of difficulties for understanding Singlish with English syntax. We found that, despite the obvious attribute of inheriting a large portion of basic vocabularies and grammars from English, Singlish not only imports terms from regional languages and dialects, its lexical 1We follow Leimgruber (2011) in using “grammar” to describe “syntactic constructions” and we do not differentiate the two expressions in this paper. 1732 Singlish dependency parser trained with small Singlish treebank English syntactic and semantic knowledge learnt from large treebank Singlish sentences Singlish dependency trees Figure 1: Overall model diagram semantics and syntax also deviate significantly from English (Leimgruber, 2009, 2011). We categorize the challenges and formalize their interpretation using Universal Dependencies (Nivre et al., 2016), which extends to the creation of a Singlish dependency treebank with 1,200 sentences. Based on the intricate relationship between Singlish and English, we build a Singlish parser by leveraging knowledge of English syntax as a basis. This overall approach is illustrated in Figure 1. In particular, we train a basic Singlish parser with the best off-the-shelf neural dependency parsing model using biaffine attention (Dozat and Manning, 2017), and improve it with knowledge transfer by adopting neural stacking (Chen et al., 2016; Zhang and Weiss, 2016) to integrate the English syntax. Since POS tags are important features for dependency parsing (Chen and Manning, 2014; Dyer et al., 2015), we train a POS tagger for Singlish following the same idea by integrating English POS knowledge using neural stacking. Results show that English syntax knowledge brings 51.50% and 25.01% relative error reduction on POS tagging and dependency parsing respectively, resulting in a Singlish dependency parser with 84.47% unlabeled attachment score (UAS) and 77.76% labeled attachment score (LAS). We make our Singlish dependency treebank, the source code for training a dependency parser and the trained model for the parser with the best performance freely available online2. 2https://github.com/wanghm92/Sing_Par 2 Related Work Neural networks have led to significant advance in the performance for dependency parsing, including transition-based parsing (Chen and Manning, 2014; Zhou et al., 2015; Weiss et al., 2015; Dyer et al., 2015; Ballesteros et al., 2015; Andor et al., 2016), and graph-based parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017). In particular, the biaffine attention method of Dozat and Manning (2017) uses deep bi-directional long short-term memory (bi-LSTM) networks for highorder non-linear feature extraction, producing the highest-performing graph-based English dependency parser. We adopt this model as the basis for our Singlish parser. Our work belongs to a line of work on transfer learning for parsing, which leverages English resources in Universal Dependencies to improve the parsing accuracies of low-resource languages (Hwa et al., 2005; Cohen and Smith, 2009; Ganchev et al., 2009). 
Seminal work employed statistical models. McDonald et al. (2011) investigated delexicalized transfer, where word-based features are removed from a statistical model for English, so that POS and dependency label knowledge can be utilized for training a model for lowresource language. Subsequent work considered syntactic similarities between languages for better feature transfer (T¨ackstr¨om et al., 2012; Naseem et al., 2012; Zhang and Barzilay, 2015). Recently, a line of work leverages neural network models for multi-lingual parsing (Guo et al., 2015; Duong et al., 2015; Ammar et al., 2016). The basic idea is to map the word embedding spaces between different languages into the same vector space, by using sentence-aligned bilingual data. This gives consistency in tokens, POS and dependency labels thanks to the availability of Universal Dependencies (Nivre et al., 2016). Our work is similar to these methods in using a neural network model for knowledge sharing between different languages. However, ours is different in the use of a neural stacking model, which respects the distributional differences between Singlish and English words. This empirically gives higher accuracies for Singlish. Neural stacking was previously used for cross-annotation (Chen et al., 2016) and crosstask (Zhang and Weiss, 2016) joint-modelling on monolingual treebanks. To the best of our knowledge, we are the first to employ it on cross-lingual 1733 feature transfer from resource-rich languages to improve dependency parsing for low-resource languages. Besides these three dimensions in dealing with heterogeneous text data, another popular area of research is on the topic of domain adaption, which is commonly associated with crosslingual problems (Nivre et al., 2007). While this large strand of work is remotely related to ours, we do not describe them in details. Unsupervised rule-based approaches also offer an competitive alternative for cross-lingual dependency parsing (Naseem et al., 2010; Gillenwater et al., 2010; Gelling et al., 2012; Søgaard, 2012a,b; Mart´ınez Alonso et al., 2017), and recently been benchmarked for the Universal Dependencies formalism by exploiting the linguistic constraints in the Universal Dependencies to improve the robustness against error propagation and domain adaption (Mart´ınez Alonso et al., 2017). However, we choose a data-driven supervised approach given the relatively higher parsing accuracy owing to the availability of resourceful treebanks from the Universal Dependencies project. 3 Singlish Dependency Treebank 3.1 Universal Dependencies for Singlish Since English is the major genesis of Singlish, we choose English as the source of lexical feature transfer to assist Singlish dependency parsing. Universal Dependencies provides a set of multilingual treebanks with cross-lingually consistent dependency-based lexicalist annotations, designed to aid development and evaluation for cross-lingual systems, such as multilingual parsers (Nivre et al., 2016). The current version of Universal Dependencies comprises not only major treebanks for 47 languages but also their siblings for domain-specific corpora and dialects. With the aligned initiatives for creating transfer-learning-friendly treebanks, we adopt the Universal Dependencies protocol for constructing the Singlish dependency treebank, both as a new resource for the low-resource languages and to facilitate knowledge transfer from English. 
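Treebanks released under Universal Dependencies are conventionally distributed in the CoNLL-U format; as a hedged illustration (the file name below is hypothetical and not part of the released resources), the 10-column annotations can be read into (form, UPOS, head, relation) tuples as follows.

```python
def read_conllu(path):
    """Yield one sentence at a time as a list of (form, upos, head, deprel) tuples."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                              # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
                continue
            if line.startswith("#"):                  # sentence-level comments
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:      # skip multi-word and empty tokens
                continue
            # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
            sentence.append((cols[1], cols[3], int(cols[6]), cols[7]))
    if sentence:
        yield sentence

# e.g. sentences = list(read_conllu("singlish_train.conllu"))  # hypothetical path
```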
On top of the general Universal Dependencies guidelines, English-specific dependency relation definitions including additional subtypes are employed as the default standards for annotating the Singlish dependency treebank, unless augmented or redefined when necessary. The latest English UD English Singlish Sentences Words Sentences Words Train 12,543 204,586 900 8,221 Dev 2,002 25,148 150 1,384 Test 2,077 25,096 150 1,381 Table 1: Division of training, development, and test sets for Singlish Treebank corpus in Universal Dependencies v1.43 collection is constructed from the English Web Treebank (Bies et al., 2012), comprising of web media texts, which potentially smooths the knowledge transfer to our target Singlish texts in similar domains. The statistics of this dataset, from which we obtain English syntactic knowledge, is shown in Table 1 and we refer to this corpus as UD-Eng. This corpus uses 47 dependency relations and we show below how to conform to the same standard while adapting to unique Singlish grammars. 3.2 Challenges and Solutions for Annotating Singlish The deviations of Singlish from English come from both the lexical and the grammatical levels (Leimgruber, 2009, 2011), which bring challenges for analysis on Singlish using English NLP tools. The former involves imported vocabularies from the first languages of the local people and the latter can be represented by a set of relatively localized features which collectively form 5 unique grammars of Singlish according to Leimgruber (2011). We find empirically that all these deviations can be accommodated by applying the existing English dependency relation definitions while ensuring consistency with the annotations in other non-English UD treebanks, which are explained with examples as follows. Imported vocabulary: Singlish borrows a number of words and expressions from its nonEnglish origins (Leimgruber, 2009, 2011), such as “Kiasu”, which originates from Hokkien meaning “very anxious not to miss an opportunity”.4 These imported terms often constitute out-of-vocabulary (OOV) words with respect to a standard English treebank and result in difficulties for using English-trained tools on Singlish. All borrowed words are annotated based on their usages in Singlish, which mainly inherit the POS from their genesis languages. Table A4 in Appendix A 3Only guidelines for Universal Dependencies v2 but not the English corpus is available when this work is completed. 4Definition by the Oxford living Dictionaries for English. 1734 (1) Drive this car sure draw looks . root det dobj csubj advmod dobj punct (2) SG where got attap chu ? root nsubj advmod dobj compound punct (3) Inside tent can not see leh ! root nmod aux neg discourse punct case (4) U betting more downside from here ? root nsubj dobj amod case nmod punct (5) Hope can close 22 today . root ccomp aux dobj nmod:tmod punct (6) Best to makan all , tio boh ? root mark xcomp dobj punct neg discourse punct (7) I never get it free one ! root advmod nsubj dobj xcomp discourse punct Figure 2: Unique Singlish grammars. (Arcs represent dependencies, pointing from the head to the dependent, with the dependency relation label right on top of the arc) summarizes all borrowed terms in our treebank. Topic-prominence: This type of sentences start with establishing its topic, which often serves as the default one that the rest of the sentence refers to, and they typically employ an object-subjectverb sentence structure (Leimgruber, 2009, 2011). 
In particular, three subtypes of topic-prominence are observed in the Singlish dependency treebank and their annotations are addressed as follows: First, topics framed as clausal arguments at the beginning of the sentence are labeled as “csubj” (clausal subject), as shown by “Drive this car” of (1) in Figure 2, which is consistent with the dependency relations in its Chinese translation. Second, noun phrases used to modify the predicate with the absence of a preposition is regarded as a “nsubj” (nominal subject). Similarly, this is a common order of words used in Chinese and one example is the “SG” of (2) in Figure 2. Third, prepositional phrases moved in front are still treated as “nmod” (nominal modifier) of their intended heads, following the exact definition but as a Singlish-specific form of exemplification, as shown by the “Inside tent” of (3) in Figure 2. Although the “dislocated” (dislocated elements) relation in UD is also used for preposed elements, but it captures the ones “that do not fulfill the usual core grammatical relations of a sentence” and “not for a topic-marked noun that is also the subject of the sentence” (Nivre et al., 2016). In these three scenarios, the topic words or phrases are in relatively closer grammatical relations to the predicate, as subjects or modifiers. Copula deletion: Imported from the corresponding Chinese sentence structure, this copula verb is often optional and even deleted in Singlish, which is one of its diagnostic characteristics (Leimgruber, 2009, 2011). In UD-Eng standards, predicative “be” is the only verb used as a copula and it often depends on its complement to avoid copular head. This is explicitly designed in UD to promote parallelism for zero-copula phenomenon in languages such as Russian, Japanese, and Arabic. The deleted copula and its “cop” (copula) arcs are simply ignored, as shown by (4) in Figure 2. NP deletion: Noun-phrase (NP) deletion often results in null subjects or objects. It may be regarded as a branch of “Topic-prominence” but is a distinctive feature of Singlish with relatively high frequency of usage (Leimgruber, 2011). NP deletion is also common in pronoun-dropping languages such as Spanish and Italian, where the anaphora can be morphologically inferred. In one example, “Vorrei ora entrare brevemente nel merito.”5, from the Italian treebank in UD, “Vorrei” means “I would like to” and depends on the sentence root, “entrare”, with the “aux”(auxiliary) relation, where the subject “I” is absent but implicitly understood. Similarly, we do not recover such relations since the deleted NP imposes negligible alteration to the dependency tree, as exemplified by (5) in Figure 2. Inversion: Inversion in Singlish involves either keeping the subject and verb in interrogative sentences in the same order as in statements, or tag questions in polar interrogatives (Leimgruber, 2011). The former also exists in non-English languages, such as Spanish and Italian, where the subject can prepose the verb in questions (La5In English: (I) would now like to enter briefly on the merit (of the discussion). 1735 housse and Lamiroy, 2012). This simply involves a change of word orders and thus requires no special treatments. On the other hand, tag questions should be carefully analyzed in two scenarios. 
One type is in the form of “isn’t it?” or “haven’t you?”, which are dependents of the sentence root with the “parataxis” relation.6 The other type is exemplified as “right?”, and its Singlish equivalent “tio boh?” (a transliteration from Hokkien) are labeled with the “discourse” (discourse element) relation with respect to the sentence root. See example (6) in Figure 2. Discourse particles: Usage of clausal-final discourse particles, which originates from Hokkien and Cantonese, is one of the most typical feature of Singlish (Leimgruber, 2009, 2011; Lim, 2007). All discourse particles that appear in our treebank are summarized in Table A3 in Appendix A with the imported vocabulary:. These words express the tone of the sentence and thus have the “INTJ” (interjection) POS tag and depend on the root of the sentence or clause labeled with “discourse”, as is shown by the “leh” of (3) in Figure 2. The word “one” is a special instance of this type with the sole purpose being a tone marker in Singlish but not English, as shown by (7) in Figure 2. 3.3 Data Selection and Annotation Data Source: Singlish is used in written form mainly in social media and local Internet forums. After comparison, we chose the SG Talk Forum7 as our data source due to its relative abundance in Singlish contents. We crawled 84,459 posts using the Scrapy framework8 from pages dated up to 25th December 2016, retaining sentences of length between 5 and 50, which total 58,310. Sentences are reversely sorted according to the log likelihood of the sentence given by an English language model trained using the KenLM toolkit (Heafield et al., 2013)9 normalized by the sentence length, so that those most different from standard English can be chosen. Among the top 10,000 sentences, 1,977 sentences contain unique Singlish vocabularies defined by The 6In UD: Relation between the main verb of a clause and other sentential elements, such as sentential parenthetical clause, or adjacent sentences without any explicit coordination or subordination. 7http://sgTalk.com 8https://scrapy.org/ 9Trained using the afp eng and xin eng sources of English Gigaword Fifth Edition (Gigaword). Coxford Singlish Dictionary10, A Dictionary of Singlish and Singapore English11, and the Singlish Vocabulary Wikipedia page12. The average normalized log likelihood of these 10,000 sentences is -5.81, and the same measure for all sentences in UD-Eng is -4.81. This means these sentences with Singlish contents are 10 times less probable expressed as standard English than the UD-Eng contents in the web domain. This contrast indicates the degree of lexical deviation of Singlish from English. We chose 1,200 sentences from the first 10,000. More than 70% of the selected sentences are observed to consist of the Singlish grammars and imported vocabularies described in section 3.2. Thus the evaluations on this treebank can reflect the performance of various POS taggers and parsers on Singlish in general. Annotation: The chosen texts are divided by random selection into training, development, and testing sets according to the proportion of sentences in the training, development, and test division for UD-Eng, as summarized in Table 1. The sentences are tokenized using the NLTK Tokenizer,13 and then annotated using the Dependency Viewer.14 In total, all 17 UD-Eng POS tags and 41 out of the 47 UD-Eng dependency labels are present in the Singlish dependency treebank. 
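Returning to the data-selection step above, the length-normalised language-model scoring can be sketched as follows, assuming the kenlm Python bindings and a trained English model file (the path and variable names are illustrative, not taken from the authors' pipeline).

```python
import kenlm

model = kenlm.Model("gigaword_en.bin")   # hypothetical path to the trained English LM

def normalized_logprob(sentence):
    """Length-normalised log10 probability of a sentence under the English LM."""
    tokens = sentence.split()
    return model.score(sentence, bos=True, eos=True) / max(len(tokens), 1)

# Sort ascending so that sentences least like standard English come first.
# candidates = sorted(forum_sentences, key=normalized_logprob)
# most_singlish = candidates[:10000]
```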
Besides, 100 sentences are randomly selected and double annotated by one of the coauthors, and the inter-annotator agreement has a 97.76% accuracy on POS tagging and a 93.44% UAS and a 89.63% LAS for dependency parsing. A full summary of the numbers of occurrences of each POS tag and dependency label are included in Appendix A. 4 Part-of-Speech Tagging In order to obtain automatically predicted POS tags as features for a base English dependency parser, we train a POS tagger for UD-Eng using the baseline model of Chen et al. (2016), depicted in Figure 3. The bi-LSTM networks with a CRF layer (bi-LSTM-CRF) have shown state-of-the-art performance by globally optimizing the tag sequence (Huang et al., 2015; Chen et al., 2016). 10http://72.5.72.93/html/lexec.php 11http://www.singlishdictionary.com 12https://en.wikipedia.org/wiki/ Singlish_vocabulary 13http://www.nltk.org/api/nltk. tokenize.html 14http://nlp.nju.edu.cn/tanggc/tools/ DependencyViewer.exe 1736 x2 x1 … h1 h1 h2 h2 xn hn hn Tanh Tanh Tanh Linear Linear Linear CRF … … … … … t1 t2 tn … Output layer Feature layer Input layer Figure 3: Base POS tagger Based on this English POS tagging model, we train a POS tagger for Singlish using the featurelevel neural stacking model of Chen et al. (2016). Both the English and Singlish models consist of an input layer, a feature layer, and an output layer. 4.1 Base Bi-LSTM-CRF POS Tagger Input Layer: Each token is represented as a vector by concatenating a word embedding from a lookup table with a weighted average of its character embeddings given by the attention model of Bahdanau et al. (2014). Following Chen et al. (2016), the input layer produces a dense representation for the current input token by concatenating its word vector and the ones for its surrounding context tokens in a window of finite size. Feature Layer: This layer employs a bi-LSTM network to encode the input into a sequence of hidden vectors that embody global contextual information. Following Chen et al. (2016), we adopt bi-LSTM with peephole connections (Graves and Schmidhuber, 2005). Output layer: This is a CRF layer to predict the POS tags for the input words by maximizing the conditional probability of the sequence of tags given input sentence. 4.2 POS Tagger with Neural Stacking We adopt the deep integration neural stacking structure presented in Chen et al. (2016). As shown in Figure 4, the distributed vector representation for the target word at the input layer of the Singlish Tagger is augmented by concatenating the emission vector produced by the English Tagger with the original word and character-based embeddings, before applying the concatenation within a context window in section 4.1. During training, loss is back-propagated to all trainable parameters … h1 h1 h2 h2 hn hn Tanh Tanh Tanh Singlish Tagger output layer … … … English Tagger feature layer … x2 Linear x1 Linear xn Linear x1 x2 xn Output layer Feature layer Input layer Base English Tagger Figure 4: POS tagger with neural stacking System Accuracy ENG-on-SIN 81.39% Base-ICE-SIN 78.35% Stack-ICE-SIN 89.50% Table 2: POS tagging accuracies in both the Singlish Tagger and the pre-trained feature layer of the base English Tagger. At test time, the input sentence is fed to the integrated tagger model as a whole for inference. 4.3 Results We use the publicly available source code15 by Chen et al. (2016) to train a 1-layer biLSTM-CRF based POS tagger on UD-Eng, using 50-dimension pre-trained SENNA word embeddings (Collobert et al., 2011). 
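Before turning to the remaining hyper-parameters, the feature-level stacking of Section 4.2 can be illustrated with a small sketch (this is not the released NNHetSeq code): the Singlish tagger's input for each token is simply the windowed concatenation of its own word and character representations with the English tagger's emission vector.

```python
import numpy as np

def stacked_tagger_input(word_emb, char_emb, eng_emission, window=1):
    """word_emb (n, d_w), char_emb (n, d_c): token representations for one sentence;
    eng_emission (n, d_e): per-token emission vectors from the base English tagger.
    Returns (n, (2*window+1)*d) inputs formed by windowed concatenation."""
    x = np.concatenate([word_emb, char_emb, eng_emission], axis=1)   # deep integration
    n, d = x.shape
    padded = np.vstack([np.zeros((window, d)), x, np.zeros((window, d))])
    return np.stack([padded[i:i + 2 * window + 1].reshape(-1) for i in range(n)])
```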
We set the hidden layer size to 300, the initial learning rate for Adagrad (Duchi et al., 2011) to 0.01, the regularization parameter λ to 10−6, and the dropout rate to 15%. The tagger gives 94.84% accuracy on the UD-Eng test set after 24 epochs, chosen according to development tests, which is comparable to the stateof-the-art accuracy of 95.17% reported by Plank et al. (2016). We use these settings to perform 10fold jackknifing of POS tagging on the UD-Eng training set, with an average accuracy of 95.60%. Similarly, we trained a POS tagger using the Singlish dependency treebank alone with pretrained word embeddings on The Singapore Component of the International Corpus of English (ICE-SIN) (Nihilani, 1992; Ooi, 1997), which consists of both spoken and written texts. However, due to limited amount of training data, the 15https://github.com/chenhongshen/ NNHetSeq 1737 Output layer Input layer Feature layer x2 x1 … xn … … … … … … … … ℎ1 1 ℎ1 1 ℎ𝑚 1 ℎ𝑚 1 ℎ𝑚 2 ℎ𝑚 2 ℎ1 2 ℎ1 2 ℎ𝑚 𝑛 ℎ𝑚 𝑛 ℎ1 𝑛 ℎ1 𝑛 MLPd MLPh MLPd MLPh MLPd MLPh 1 … 1 1 … … … = … Hd + Hh U + 1 w S Figure 5: Base parser tagging accuracy is not satisfactory even with a larger dropout rate to avoid over-fitting. In contrast, the neural stacking structure on top of the English base model trained on UD-Eng achieves a POS tagging accuracy of 89.50%16, which corresponds to a 51.50% relative error reduction over the baseline Singlish model, as shown in Table 2. We use this for 10-fold jackknifing on Singlish parsing training data, and tagging the Singlish development and test data. 5 Dependency Parsing We adopt the Dozat and Manning (2017) parser17 as our base model, as displayed in Figure 5, and apply neural stacking to achieve improvements over the baseline parser. Both the base and neural stacking models consist of an input layer, a feature layer, and an output layer. 5.1 Base Parser with Bi-affine Attentions Input Layer: This layer encodes the current input word by concatenating a pre-trained word embedding with a trainable word embedding and POS tag embedding from the respective lookup tables. Feature Layer: The two recurrent vectors produced by the multi-layer bi-LSTM network from each input vector are concatenated and mapped to multiple feature vectors in lower-dimension space by a set of parallel multilayer perceptron (MLP) 16We empirically find that using ICE-SIN embeddings in neural stacking model performs better than using English SENNA embeddings. Similar findings are found for the parser, of which more details are given in section 6. 17https://github.com/tdozat/Parser … … … … … … … ℎ𝑚 𝑖 ℎ𝑚 𝑖 ℎ1 𝑖 ℎ1 𝑖 ℎ𝑚 𝑗 ℎ𝑚 𝑗 ℎ1 𝑗 ℎ1 𝑗 MLPd MLPh MLPd MLPh English Parser Bi-LSTM xi ℎ𝑚 𝑖 ℎ𝑚 𝑖 xi … MLPd MLPh MLPd MLPh … … … … … … … … Singlish Parser output layer + + … … … … … xj ℎ𝑚 𝑗 ℎ𝑚 𝑗 + + … … xj … … … Output layer Input layer Feature layer Base English Parser Figure 6: Parser with neural stacking layers. Following Dozat and Manning (2017), we adopt Cif-LSTM cells (Greff et al., 2016). Output Layer: This layer applies biaffine transformation on the feature vectors to calculate the score of the directed arcs between every pair of words. The inferred trees for input sentence are formed by choosing the head with the highest score for each word and a cross-entropy loss is calculated to update the model parameters. 
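To make the biaffine scoring in the output layer concrete, here is a small NumPy sketch. It is a simplification: the exact arrangement of the bias terms in Dozat and Manning (2017) differs slightly, and a second biaffine classifier is used for dependency labels.

```python
import numpy as np

def biaffine_arc_scores(H_dep, H_head, U, w):
    """H_dep, H_head: (n, d) MLP outputs for the dependent and head roles of the n words.
    U: (d, d) bilinear weights; w: (d,) bias weights over head representations.
    Returns S with S[i, j] = score of word j being the head of word i."""
    S = H_dep @ U @ H_head.T          # bilinear term
    S += (H_head @ w)[None, :]        # prior score of each word acting as a head
    return S

# Greedy unlabelled parse: pick the highest-scoring head for every word.
# heads = biaffine_arc_scores(H_dep, H_head, U, w).argmax(axis=1)
# (A maximum-spanning-tree decoder can be used instead if a well-formed tree is required.)
```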
5.2 Parser with Neural Stacking Inspired by the idea of feature-level neural stacking (Chen et al., 2016; Zhang and Weiss, 2016), we concatenate the pre-trained word embedding, trainable word and tag embeddings, with the two recurrent state vectors at the last bi-LSTM layer of the English Tagger as the input vector for each target word. In order to further preserve syntactic knowledge retained by the English Tagger, the feature vectors from its MLP layer is added to the ones produced by the Singlish Parser, as illustrated in Figure 6, and the scoring tensor of the Singlish Parser is initialized with the one from the trained English Tagger. Loss is back-propagated by reversely traversing all forward paths to all trainable parameter for training and the whole model is used collectively for inference. 6 Experiments 6.1 Experimental Settings We train an English parser on UD-Eng with the default model settings in Dozat and Manning (2017). 1738 Sentences Words Vocabulary GloVe6B N.A. 6000m 400,000 Giga100M 57,000 1.26m 54,554 ICE-SIN 87,084 1.26m 40,532 Table 3: Comparison of the scale of sources for training word embeddings Trained on System UAS LAS English ENG-on-SIN 75.89 65.62 Baseline 75.98 66.55 Singlish Base-Giga100M 77.67 67.23 Base-GloVe6B 78.18 68.51 Base-ICE-SIN 79.29 69.27 Both ENG-plus-SIN 82.43 75.64 Stack-ICE-SIN 84.47 77.76 Table 4: Dependency parser performances It achieves an UAS of 88.83% and a LAS of 85.20%, which are close to the state-of-the-art 85.90% LAS on UD-Eng reported by Ammar et al. (2016), and the main difference is caused by us not using fine-grained POS tags. We apply the same settings for a baseline Singlish parser. We attempt to choose a better configuration of the number of bi-LSTM layers and the hidden dimension based on the development set performance, but the default settings turn out to perform the best. Thus we stick to all default hyper-parameters in Dozat and Manning (2017) for training the Singlish parsers. We experimented with different word embeddings, as with the raw text sources summarized in Table 3 and further described in section 6.2. When using the neural stacking model, we fix the model configuration for the base English parser model and choose the size of the hidden vector and the number of bi-LSTM layers stacked on top based on the performance on the development set. It turns out that a 1-layer bi-LSTM with 900 hidden dimension performs the best, where the bigger hidden layer accommodates the elongated input vector to the stacked bi-LSTM and the fewer number of recurrent layers avoids over-fitting on the small Singlish dependency treebank, given the deep bi-LSTM English parser network at the bottom. The evaluation of the neural stacking model is further described in section 6.3. 
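For reference, the UAS and LAS figures reported in this section can be computed with a helper like the one below (a sketch; actual evaluation scripts may additionally exclude punctuation tokens).

```python
def attachment_scores(gold_heads, pred_heads, gold_rels, pred_rels):
    """UAS: fraction of tokens with the correct head;
    LAS: fraction with both the correct head and the correct dependency label."""
    total = len(gold_heads)
    uas = sum(g == p for g, p in zip(gold_heads, pred_heads))
    las = sum(g == p and gr == pr
              for g, p, gr, pr in zip(gold_heads, pred_heads, gold_rels, pred_rels))
    return 100.0 * uas / total, 100.0 * las / total
```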
System UAS LAS Base-ICE-SIN 77.00 66.69 Stack-ICE-SIN 82.43 73.96 Table 5: Dependency parser performances by the 5-cross-fold validation 6.2 Investigating Distributed Lexical Characteristics In order to learn characteristics of distributed lexical semantics for Singlish, we compare performances of the Singlish dependency parser using several sets of pre-trained word embeddings: GloVe6B, large-scale English word embeddings18; ICE-SIN, Singlish word embeddings trained using GloVe (Pennington et al., 2014) on the ICE-SIN (Nihilani, 1992; Ooi, 1997) corpus; Giga100M, a small-scale English word embeddings trained using GloVe (Pennington et al., 2014) with the same settings on a comparable size of English data randomly selected from the English Gigaword Fifth Edition for a fair comparison with ICE-SIN embeddings. First, the English Giga100M embeddings marginally improve the Singlish parser from the baseline without pre-trained embeddings and also using the UD-Eng parser directly on Singlish, represented as “ENG-on-SIN” in Table 4. With much more English lexical semantics being fed to the Singlish parser using the English GloVe6B embeddings, further enhancement is achieved. Nevertheless, the Singlish ICE-SIN embeddings lead to even more improvement, with 13.78% relative error reduction, compared with 7.04% using the English Giga100M embeddings and 9.16% using the English GloVe6B embeddings, despite the huge difference in sizes in the latter case. This demonstrates the distributional differences between Singlish and English tokens, even though they share a large vocabulary. More detailed comparison is described in section 6.4. 6.3 Knowledge Transfer Using Neural Stacking We train a parser with neural stacking and Singlish ICE-SIN embeddings, which achieves the best performance among all the models, with a UAS of 84.47%, represented as “Stack-ICE-SIN” in Table 4, which corresponds to 25.01% relative error reduction compared to the baseline. This demonstrates that knowledge from English can be successfully incorporated to boost the Singlish parser. To further evaluate the effectiveness of the neural stacking model, we also trained a base model with the combination of UD-Eng and the Singlish tree18Trained with Wikipedia 2014 the Gigaword. Downloadable from http://nlp.stanford.edu/data/ glove.6B.zip 1739 Topic Prominence Copula Deletion NP Deletion Discourse Particles Others Sentences 15 19 21 51 67 UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS ENG-on-SIN 78.15 62.96 66.91 56.83 72.57 64.00 70.00 59.00 78.92 68.47 Base-Giga100M 77.78 68.52 71.94 61.15 76.57 69.14 85.25 77.25 73.13 60.63 Base-ICE 81.48 72.22 74.82 63.31 80.00 73.71 85.25 77.75 75.56 64.37 Stack-ICE 87.04 76.85 77.70 71.22 80.00 75.43 88.50 83.75 84.14 76.49 Table 6: Error analysis with respect to grammar types bank, represented as “ENG-plus-SIN” in Table 4, which is still outperformed by the neural stacking model. Besides, we performed a 5-cross-fold validation for the base parser with Singlish ICE-SIN embeddings and the parser using neural stacking, where half of the held-out fold is used as the development set. The average UAS and LAS across the 5 folds shown in Table 5 and the relative error reduction on average 23.61% suggest that the overall improvement from knowledge transfer using neural stacking remains consistent. This significant improvement is further explained in section 6.4. 
6.4 Improvements over Grammar Types To analyze the sources of improvements for Singlish parsing using different model configurations, we conduct error analysis over 5 syntactic categories19, including 4 types of grammars mentioned in section 3.220, and 1 for all other cases, including sentences containing imported vocabularies but expressed in basic English syntax. The number of sentences and the results in each group of the test set are shown in Table 6. The neural stacking model leads to the biggest improvement over all categories except for a tie UAS performance on “NP Deletion” cases, which explains the significant overall improvement. Comparing the base model with ICE-SIN embeddings with the base parser trained on UD-Eng, which contain syntactic and semantic knowledge in Singlish and English, respectively, the former outperforms the latter on all 4 types of Singlish grammars but not for the remaining samples. This suggests that the base English parser mainly contributes to analyzing basic English syntax, while the base Singlish parser models unique Singlish grammars better. Similar trends are also observed on the base model using the English Giga100M embeddings, but the overall performances are not as good as 19Multiple labels are allowed for one sentence. 20The “Inversion” type of grammar is not analyzed since there is only 1 such sentence in the test set. using ICE-SIN embeddings, especially over basic English syntax where it undermines the performance to a greater extent. This suggests that only limited English distributed lexical semantic information can be integrated to help modelling Singlish syntactic knowledge due to the differences in distributed lexical semantics. 7 Conclusion We have investigated dependency parsing for Singlish, an important English-based creole language, through annotations of a Singlish dependency treebank with 10,986 words and building an enhanced parser by leveraging on knowledge transferred from a 20-times-bigger English treebank of Universal Dependencies. We demonstrate the effectiveness of using neural stacking for feature transfer by boosting the Singlish dependency parsing performance to from UAS 79.29% to UAS 84.47%, with a 25.01% relative error reduction over the parser with all available Singlish resources. We release the annotated Singlish dependency treebank, the trained model and the source code for the parser with free public access. Possible future work include expanding the investigation to other regional languages such as Malay and Indonesian. Acknowledgments Yue Zhang is the corresponding author. This research is supported by IGDSS1603031 from Temasek Laboratories@SUTD. We appreciate anonymous reviewers for their insightful comments, which helped to improve the paper, and Zhiyang Teng, Jiangming Liu, Yupeng Liu, and Enrico Santus for their constructive discussions. 1740 References Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. Transactions of the Association of Computational Linguistics 4:431–444. http://aclweb.org/anthology/Q16-1031. Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the ACL 2016. Association for Computational Linguistics, pages 2442–2452. https://doi.org/10.18653/v1/P16-1231. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. 
Neural machine translation by jointly learning to align and translate. arXiv preprint abs/1409.0473. http://arxiv.org/abs/1409.0473. Miguel Ballesteros, Chris Dyer, and A. Noah Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the EMNLP 2015. Association for Computational Linguistics, pages 349–359. https://doi.org/10.18653/v1/D15-1041. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English web treebank ldc2012t13 . Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the EMNLP 2014. Association for Computational Linguistics, pages 740–750. https://doi.org/10.3115/v1/D14-1082. Hongshen Chen, Yue Zhang, and Qun Liu. 2016. Neural network for heterogeneous annotations. In Proceedings of the EMNLP 2016. Association for Computational Linguistics, pages 731–741. http://aclweb.org/anthology/D16-1070. Shay Cohen and A. Noah Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of the NAACL-HLT 2009. Association for Computational Linguistics, pages 74–82. http://aclweb.org/anthology/N09-1009. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. http://dl.acm.org/citation.cfm?id=2078186. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In International Conference on Learning Representations 2017. volume abs/1611.01734. http://arxiv.org/abs/1611.01734. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. http://dl.acm.org/citation.cfm?id=2021068. Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. A neural network model for low-resource universal dependency parsing. In Proceedings of the EMNLP 2015. Association for Computational Linguistics, pages 339–348. https://doi.org/10.18653/v1/D15-1040. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and A. Noah Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the ACL-IJCNLP 2015. Association for Computational Linguistics, pages 334–343. https://doi.org/10.3115/v1/P151033. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the ACL-IJCNLP 2009. Association for Computational Linguistics, pages 369–377. http://aclweb.org/anthology/P09-1042. Douwe Gelling, Trevor Cohn, Phil Blunsom, and Joao Graca. 2012. The pascal challenge on grammar induction. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure. Association for Computational Linguistics, pages 64–80. http://www.aclweb.org/anthology/W12-1909. Jennifer Gillenwater, Kuzman Ganchev, Jo˜ao Grac¸a, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 (Short Papers). Association for Computational Linguistics, pages 194–199. http://www.aclweb.org/anthology/P10-2036. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5):602–610. K. Greff, R. K. Srivastava, J. Koutnk, B. R. Steunebrink, and J. Schmidhuber. 2016. 
Lstm: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems PP(99):1–11. https://doi.org/10.1109/TNNLS.2016.2582924. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the ACL-IJCNLP 2015. Association for Computational Linguistics, pages 1234– 1244. https://doi.org/10.3115/v1/P15-1119. Shinichi Harada. 2009. The roles of singapore standard english and singlish. Information Research 40:70– 82. 1741 Kenneth Heafield, Ivan Pouzyrevsky, H. Jonathan Clark, and Philipp Koehn. 2013. Scalable modified kneser-ney language model estimation. In Proceedings of the ACL 2013 (Short Papers). Association for Computational Linguistics, pages 690–696. http://aclweb.org/anthology/P13-2121. Sanjika Hewavitharana, Nguyen Bach, Qin Gao, Vamshi Ambati, and Stephan Vogel. 2011. Proceedings of the Sixth Workshop on Statistical Machine Translation, Association for Computational Linguistics, chapter CMU Haitian Creole-English Translation System for WMT 2011, pages 386–392. http://aclweb.org/anthology/W11-2146. Chang Hu, Philip Resnik, Yakov Kronrod, Vladimir Eidelman, Olivia Buzek, and B. Benjamin Bederson. 2011. Proceedings of the Sixth Workshop on Statistical Machine Translation, Association for Computational Linguistics, chapter The Value of Monolingual Crowdsourcing in a Real-World Translation Scenario: Simulation using Haitian Creole Emergency SMS Messages, pages 399–404. http://aclweb.org/anthology/W11-2148. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint abs/1508.01991. http://arxiv.org/abs/1508.01991. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering 11(3):311– 325. https://doi.org/10.1017/S1351324905003840. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association of Computational Linguistics 4:313– 327. http://aclweb.org/anthology/Q16-1023. Karen Lahousse and B´eatrice Lamiroy. 2012. Word order in french, spanish and italian: A grammaticalization account. Folia Linguistica 46(2):387–415. Jakob R. E. Leimgruber. 2009. Modelling variation in Singapore English. Ph.D. thesis, Oxford University. Jakob R. E. Leimgruber. 2011. Singapore english. Language and Linguistics Compass 5(1):47–62. https://doi.org/10.1111/j.1749-818X.2010.00262.x. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the EMNLP 2015. Association for Computational Linguistics, pages 2304–2314. https://doi.org/10.18653/v1/D15-1278. Lisa Lim. 2007. Mergers and acquisitions: on the ages and origins of singapore english particles. World Englishes 26(4):446–473. H´ector Mart´ınez Alonso, ˇZeljko Agi´c, Barbara Plank, and Anders Søgaard. 2017. Parsing universal dependencies without training. In Proceedings of the EACL 2017. Association for Computational Linguistics, pages 230–240. http://www.aclweb.org/anthology/E17-1022. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the EMNLP 2011. Association for Computational Linguistics, pages 62–72. http://aclweb.org/anthology/D11-1006. Ho Mian-Lian and John T. Platt. 1993. 
Dynamics of a contact continuum: Singaporean English. Oxford University Press, USA. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the ACL 2016. Association for Computational Linguistics, pages 1105– 1116. https://doi.org/10.18653/v1/P16-1105. Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the ACL 2012. Association for Computational Linguistics, pages 629–637. http://aclweb.org/anthology/P12-1066. Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the EMNLP 2010. Association for Computational Linguistics, Cambridge, MA, pages 1234– 1244. http://www.aclweb.org/anthology/D10-1120. Paroo Nihilani. 1992. The international computerized corpus of english. Words in a cultural context. Singapore: UniPress pages 84–88. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the LREC 2016. European Language Resources Association. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Association for Computational Linguistics, pages 915– 932. http://www.aclweb.org/anthology/D/D07/D071096. Vincent B Y Ooi. 1997. Analysing the Singapore ICE corpus for lexicographic evidence. ENGLISH LANGUAGE & LITERATURE. http://scholarbank.nus.edu.sg/handle/10635/133118. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the EMNLP 2014. Association for Computational Linguistics, pages 1532–1543. https://doi.org/10.3115/v1/D14-1162. 1742 Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the ACL 2016 (Short Papers). Association for Computational Linguistics, pages 412–418. https://doi.org/10.18653/v1/P16-2067. Chun-Wei Seah, Hai Leong Chieu, Kian Ming Adam Chai, Loo-Nin Teow, and Lee Wei Yeong. 2015. Troll detection by domain-adapting sentiment analysis. In 18th International Conference on Information Fusion (Fusion) 2015. IEEE, pages 792–799. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D. Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the EMNLP 2013. Association for Computational Linguistics, pages 1631– 1642. http://aclweb.org/anthology/D13-1170. Anders Søgaard. 2012a. Two baselines for unsupervised dependency parsing. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure. Association for Computational Linguistics, pages 81–83. http://www.aclweb.org/anthology/W12-1910. Anders Søgaard. 2012b. Unsupervised dependency parsing without training. Natural Language Engineering 18(2):187203. https://doi.org/10.1017/S1351324912000022. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the NAACL-HLT 2012. 
Association for Computational Linguistics, pages 477–487. http://aclweb.org/anthology/N12-1052. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the ACL-IJCNLP 2015. Association for Computational Linguistics, pages 323–333. https://doi.org/10.3115/v1/P15-1032. Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Proceedings of the EMNLP 2015. Association for Computational Linguistics, pages 1857– 1867. https://doi.org/10.18653/v1/D15-1213. Yuan Zhang and David Weiss. 2016. Stackpropagation: Improved representation learning for syntax. In Proceedings of the 54th ACL. Association for Computational Linguistics, pages 1557– 1566. https://doi.org/10.18653/v1/P16-1147. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of the ACL-IJCNLP 2015. Association for Computational Linguistics, pages 1213–1222. https://doi.org/10.3115/v1/P15-1117. A Statistics of Singlish Dependency Treebank POS Tags ADJ 782 INTJ 556 PUNCT 1604 ADP 490 NOUN 1779 SCONJ 126 ADV 941 NUM 153 SYM 11 AUX 429 PART 355 VERB 1704 CONJ 167 PRON 682 X 10 DET 387 PROPN 810 Table A1: Statistics of POS tags Dependency labels acl 37 dobj 612 acl:relcl 29 expl 10 advcl 194 iobj 15 advmod 859 list 10 appos 18 mwe 105 amod 423 name 117 aux 377 neg 261 auxpass 47 nmod 398 case 463 nmod:npmod 26 cc 167 nmod:poss 153 ccomp 138 nmod:tmod 81 compound 420 nsubj 1005 compound:prt 30 nsubjpass 34 conj 238 nummod 94 cop 152 mark 275 csubj 30 parataxis 241 det 304 punct 1607 det:predet 7 remnant 17 discourse 552 vocative 41 dislocated 2 xcomp 190 Table A2: Statistics of dependency labels ah aiyah ba hah / har / huh hiak hiak hiak hor huat la / lah lau leh loh / lor ma / mah wahlow / wah lau wa / wah ya ya walaneh / wah lan eh Table A3: List of discourse particles 1743 A-B act blur ah beng ah ne angpow arrowed ang ku kueh angmoh/ang moh ahpek / ah peks atas boh/bo boho jiak boh pian buay lin chu buen kuey C chai tow kway chao ah beng chap chye png char kway teow chee cheong fun / che cheong fen cheesepie cheong / chiong chiam / cham chiak liao bee / jiao liao bee chio ching chong chio bu / chiobu chui chop chop chow-angmoh chwee kueh D-F dey diam diam die kock standing die pain pain dun eat grass flip prata fried beehoon G gahmen / garment gam geylang gone case gong kia goreng pisang gui H-J hai si lang heng hiong hoot Hosay / ho say how lian jepun kia / jepun kias jialat / jia lak / jia lat K ka kaki kong kaki song kancheong kateks kautim kay kiang kayu kee chia kee siao kelong kena / kana kiam kiasu ki seow kkj kong si mi kopi kopi lui kopi-o kosong koyok ku ku bird L lagi lai liao laksa lao jio kong lao sai lau chwee nua liao / ler like dat / like that lim peh lobang M mahjong kaki makan masak masak mati mee mee pok mee rebus mee siam mee sua mei mei N-S nasi lemak pang sai piak sabo sai same same sia sianz / sian sia suay sibeh siew dai siew siew dai simi taisee soon kuey sotong suay / suey swee T tahan tak pakai te te kee tong tua tikopeh tio tio pian/dio pian talk cock / talk cock sing song U-Z umm zai up lorry / up one’s lorry xiao zhun / buay zhun Table A4: List of imported vocabularies 1744
2017
159
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 168–178 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1016 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 168–178 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1016 Automatically Generating Rhythmic Verse with Neural Networks Jack Hopkins Computer Laboratory University of Cambridge [email protected] Douwe Kiela Facebook AI Research [email protected] Abstract We propose two novel methodologies for the automatic generation of rhythmic poetry in a variety of forms. The first approach uses a neural language model trained on a phonetic encoding to learn an implicit representation of both the form and content of English poetry. This model can effectively learn common poetic devices such as rhyme, rhythm and alliteration. The second approach considers poetry generation as a constraint satisfaction problem where a generative neural language model is tasked with learning a representation of content, and a discriminative weighted finite state machine constrains it on the basis of form. By manipulating the constraints of the latter model, we can generate coherent poetry with arbitrary forms and themes. A large-scale extrinsic evaluation demonstrated that participants consider machine-generated poems to be written by humans 54% of the time. In addition, participants rated a machinegenerated poem to be the most human-like amongst all evaluated. 1 Introduction Poetry is an advanced form of linguistic communication, in which a message is conveyed that satisfies both aesthetic and semantic constraints. As poetry is one of the most expressive forms of language, the automatic creation of texts recognisable as poetry is difficult. In addition to requiring an understanding of many aspects of language including phonetic patterns such as rhyme, rhythm and alliteration, poetry composition also requires a deep understanding of the meaning of language. Poetry generation can be divided into two subtasks, namely the problem of content, which is concerned with a poem’s semantics, and the problem of form, which is concerned with the aesthetic rules that a poem follows. These rules may describe aspects of the literary devices used, and are usually highly prescriptive. Examples of different forms of poetry are limericks, ballads and sonnets. Limericks, for example, are characterised by their strict rhyme scheme (AABBA), their rhythm (two unstressed syllables followed by one stressed syllable) and their shorter third and fourth lines. Creating such poetry requires not only an understanding of the language itself, but also of how it sounds when spoken aloud. Statistical text generation usually requires the construction of a generative language model that explicitly learns the probability of any given word given previous context. Neural language models (Schwenk and Gauvain, 2005; Bengio et al., 2006) have garnered signficant research interest for their ability to learn complex syntactic and semantic representations of natural language (Mikolov et al., 2010; Sutskever et al., 2014; Cho et al., 2014; Kim et al., 2015). 
Poetry generation is an interesting application, since performing this task automatically requires the creation of models that not only focus on what is being written (content), but also on how it is being written (form). We experiment with two novel methodologies for solving this task. The first involves training a model to learn an implicit representation of content and form through the use of a phonological encoding. The second involves training a generative language model to represent content, which is then constrained by a discriminative pronunciation model, representing form. This second model is of particular interest because poetry with arbitrary rhyme, rhythm, repetition and themes can be generated by tuning the pronunciation model. 168 2 Related Work Automatic poetry generation is an important task due to the significant challenges involved. Most systems that have been proposed can loosely be categorised as rule-based expert systems, or statistical approaches. Rule-based poetry generation attempts include case-based reasoning (Gerv´as, 2000), templatebased generation (Colton et al., 2012), constraint satisfaction (Toivanen et al., 2013; Barbieri et al., 2012) and text mining (Netzer et al., 2009). These approaches are often inspired by how humans might generate poetry. Statistical approaches, conversely, make no assumptions about the creative process. Instead, they attempt to extract statistical patterns from existing poetry corpora in order to construct a language model, which can then be used to generate new poetic variants (Yi et al., 2016; Greene et al., 2010). Neural language models have been increasingly applied to the task of poetry generation. The work of Zhang and Lapata (2014) is one such example, where they were able to outperform all other classical Chinese poetry generation systems with both manual and automatic evaluation. Ghazvininejad et al. (2016) and Goyal et al. (2016) apply neural language models with regularising finite state machines. However, in the former case the rhythm of the output cannot be defined at sample time, and in the latter case the finite state machine is not trained on rhythm at all, as it is trained on dialogue acts. McGregor et al. (2016) construct a phonological model for generating prosodic texts, however there is no attempt to embed semantics into this model. 3 Phonetic-level Model Our first model is a pure neural language model, trained on a phonetic encoding of poetry in order to represent both form and content. Phonetic encodings of language represent information as sequences of around 40 basic acoustic symbols. Training on phonetic symbols allows the model to learn effective representations of pronunciation, including rhyme and rhythm. However, just training on a large corpus of poetry data is not enough. Specifically, two problems need to be overcome. 1) Phonetic encoding results in information loss: words that have the same pronunciation (homophones) cannot be perfectly reconstructed from the corresponding phonemes. This means that we require an additional probabilistic model in order to determine the most likely word given a sequence of phonemes. 2) The variety of poetry and poetic devices one can use— e.g., rhyme, rhythm, repetition—means that poems sampled from a model trained on all poetry would be unlikely to maintain internal consistency of meter and rhyme. It is therefore important to train the model on poetry which has its own internal consistency. 
Thus, the model comprises three steps: transliterating an orthographic sequence to its phonetic representation, training a neural language model on the phonetic encoding, and decoding the generated sequence back from phonemes to orthographic symbols. Phonetic encoding To solve the first step, we apply a combination of word lookups from the CMU pronunciation dictionary (Weide, 2005) with letter-to-sound rules for handling out-ofvocabulary words. These rules are based on the CART techniques described by Black et al. (1998), and are represented with a simple Finite State Transducer1. The number of letters and number of phones in a word are rarely a one-to-one match: letters may match with up to three phones. In addition, virtually all letters can, in some contexts, map to zero phones, which is known as ‘wild’ or epsilon. Expectation Maximisation is used to compute the probability of a single letter matching a single phone, which is maximised through the application of Dynamic Time Warping (Myers et al., 1980) to determine the most likely position of epsilon characters. Although this approach offers full coverage over the training corpus—even for abbreviated words like ask’d and archaic words like renewest—it has several limitations. Irregularities in the English language result in difficulty determining general letter-to-sound rules that can manage words with unusual pronunciations such as “colonel” and “receipt” 2. In addition to transliterating words into phoneme sequences, we also represent word break characters as a specific symbol. This makes 1Implemented using FreeTTS (Walker et al., 2010) 2An evaluation of models in American English, British English, German and French was undertaken by Black et al. (1998), who reported an externally validated per token accuracy on British English as low as 67%. Although no experiments were carried out on corpora of early-modern English, it is likely that this accuracy would be significantly lower. 169 decipherment, when converting back into an orthographic representation, much easier. Phonetic transliteration allows us to construct a phonetic poetry corpus comprising 1,046,536 phonemes. Neural language model We train a Long-Short Term Memory network (Hochreiter and Schmidhuber, 1997) on the phonetic representation of our poetry corpus. The model is trained using stochastic gradient descent to predict the next phoneme given a sequence of phonemes. Specifically, we maximize a multinomial logistic regression objective over the final softmax prediction. Each phoneme is represented as a 256-dimensional embedding, and the model consists of two hidden layers of size 256. We apply backpropagationthrough-time (Werbos, 1990) for 150 timesteps, which roughly equates to four lines of poetry in sonnet form. This allows the network to learn features like rhyme even when spread over multiple lines. Training is preemptively stopped at 25 epochs to prevent overfitting. Orthographic decoding When decoding from phonemes back to orthographic symbols, the goal is to compute the most likely word corresponding to a sequence of phonemes. That is, we compute the most probable hypothesis word W given a phoneme sequence ρ: arg maxi P ( Wi | ρ ) (1) We can consider the phonetic encoding of plaintext to be a homophonic cipher; that is, a cipher in which each symbol can correspond to one or more possible decodings. 
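To make the transliteration step above concrete, here is a minimal sketch of the orthography-to-phoneme encoding with an explicit word-break symbol. The toy `cmudict` dictionary and the stubbed `letter_to_sound` fallback are assumptions for illustration only; the paper uses the full CMU pronouncing dictionary together with CART letter-to-sound rules from FreeTTS for out-of-vocabulary words.

```python
# Minimal sketch of the phonetic-encoding step: orthography -> phoneme sequence.
# Assumes `cmudict` maps lowercase words to a list of ARPAbet phonemes; the
# letter-to-sound fallback used in the paper is only stubbed out here.

WORD_BREAK = "<wb>"   # explicit word-boundary symbol, as described above

# toy stand-in for the CMU pronouncing dictionary (assumption)
cmudict = {
    "shall":   ["SH", "AE1", "L"],
    "i":       ["AY1"],
    "compare": ["K", "AH0", "M", "P", "EH1", "R"],
    "thee":    ["DH", "IY1"],
}

def letter_to_sound(word):
    # placeholder for the CART-based letter-to-sound rules (out-of-vocabulary words)
    raise KeyError(f"no pronunciation for {word!r}")

def transliterate(line):
    """Return the phoneme sequence for one line, with word-break symbols."""
    phonemes = []
    for word in line.lower().split():
        pron = cmudict.get(word) or letter_to_sound(word)
        phonemes.extend(pron)
        phonemes.append(WORD_BREAK)
    return phonemes

if __name__ == "__main__":
    print(transliterate("Shall I compare thee"))
    # ['SH', 'AE1', 'L', '<wb>', 'AY1', '<wb>', 'K', 'AH0', ...]
```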
The problem of homophonic decipherment has received significant research attention in the past; with approaches utilising Expectation Maximisation (Knight et al., 2006), Integer Programming (Ravi and Knight, 2009) and A* search (Corlett and Penn, 2010). Transliteration from phonetic to an orthographic representation is done by constructing a Hidden Markov Model using the CMU pronunciation dictionary (Weide, 2005) and an n-gram language model. We calculate the transition probabilities (using the n-gram model) and the emission matrix (using the CMU pronunciation dictionary) to determine pronunciations that correspond to a single word. All pronunciations are naively considered equiprobable. We perform Viterbi decoding to find the most likely sequence of words. This means finding the most likely word wt+1 given a And humble and their fit flees are wits size but that one made and made thy step me lies ————————————— Cool light the golden dark in any way the birds a shade a laughter turn away ————————————— Then adding wastes retreating white as thine She watched what eyes are breathing awe what shine ————————————— But sometimes shines so covered how the beak Alone in pleasant skies no more to seek Figure 1: Example output of the phonetic-level model trained on Iambic Pentameter poetry (grammatical errors are emphasised). previous word sequence (wt−n, ..., wt). arg maxwt+1 P ( wt+1 | w1, ... , wt ) (2) If a phonetic sequence does not map to any word, we apply the heuristic of artificially breaking the sequence up into two subsequences at index n, such that n maximises the n-gram frequency of the subsequences. Output A popular form of poetry with strict internal structure is the sonnet. Popularised in English by Shakespeare, the sonnet is characterised by a strict rhyme scheme and exactly fourteen lines of Iambic Pentameter (Greene et al., 2010). Since the 17,134 word tokens in Shakespeare’s 153 sonnets are insufficient to train an effective model, we augment this corpus with poetry taken from the website sonnets.org, yielding a training set of 288,326 words and 1,563,457 characters. An example of the output when training on this sonnets corpus is provided in Figure 1. Not only is it mostly in strict Iambic Pentameter, but the grammar of the output is mostly correct and the poetry contains rhyme. 4 Constrained Character-level Model As the example shows, phonetic-level language models are effective at learning poetic form, despite small training sets and relatively few parameters. However, the fact that they require training data with internal poetic consistency implies that they do not generalise to other forms of poetry. That is, in order to generate poetry in Dactylic Hexameter (for example), a phonetic model must be trained on a corpus of Dactylic poetry. Not only is this impractical, but in many cases no corpus of 170 adequate size even exists. Even when such poetic corpora are available, a new model must be trained for each type of poetry. This precludes tweaking the form of the output, which is important when generating poetry automatically. We now explore an alternative approach. Instead of attempting to represent both form and content in a single model, we construct a pipeline containing a generative language model representing content, and a discriminative model representing form. This allows us to represent the problem of creating poetry as a constraint satisfaction problem, where we can modify constraints to restrict the types of poetry we generate. 
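Before moving on to the constrained model, the orthographic decoding step described above can be sketched as a small Viterbi search over dictionary pronunciations, with transitions from a bigram language model and pronunciations treated as given and equiprobable. The pronunciation entries and bigram probabilities below are toy stand-ins, not the paper's actual models.

```python
# Hedged sketch of orthographic decoding: Viterbi search for the most probable
# word sequence whose dictionary pronunciations concatenate to the observed
# phoneme sequence. Toy pronunciations and bigram LM (assumptions).
from collections import defaultdict
import math

prons = {  # word -> one pronunciation (alternatives treated as equiprobable)
    "the": ["DH", "AH0"], "plot": ["P", "L", "AA1", "T"], "a": ["AH0"],
}
bigram = defaultdict(lambda: 1e-6, {("<s>", "the"): 0.3, ("the", "plot"): 0.2})

def viterbi_decode(phonemes):
    # best[(i, w)] = (log-prob of best segmentation of phonemes[:i] ending in w, backpointer)
    best = {(0, "<s>"): (0.0, None)}
    for i in range(len(phonemes) + 1):
        for (j, prev), (lp, _) in list(best.items()):
            if j != i:
                continue
            for w, pron in prons.items():
                k = i + len(pron)
                if phonemes[i:k] == pron:
                    cand = lp + math.log(bigram[(prev, w)])
                    if cand > best.get((k, w), (-math.inf, None))[0]:
                        best[(k, w)] = (cand, (i, prev))
    # pick the best full-length analysis and follow backpointers
    n = len(phonemes)
    end = max((s for s in best if s[0] == n), key=lambda s: best[s][0], default=None)
    words = []
    while end and end[1] != "<s>":
        words.append(end[1])
        end = best[end][1]
    return list(reversed(words))

print(viterbi_decode(["DH", "AH0", "P", "L", "AA1", "T"]))  # ['the', 'plot']
```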
Character Language Model Rather than train a model on data representing features of both content and form, we now use a simple character-level model (Sutskever et al., 2011) focused solely on content. This approach offers several benefits over the word-level models that are prevalent in the literature. Namely, their more compact vocabulary allows for more efficient training; they can learn common prefixes and suffixes to allow us to sample words that are not present in the training corpus and can learn effective language representations from relatively small corpora; and they can handle archaic and incorrect spellings of words. As we no longer need the model to explicitly represent the form of generated poetry, we can loosen our constraints when choosing a training corpus. Instead of relying on poetry only in sonnet form, we can instead construct a generic corpus of poetry taken from online sources. This corpus is composed of 7.56 million words and 34.34 million characters, taken largely from 20th Century poetry books found online. The increase in corpus size facilitates a corresponding increase in the number of permissible model parameters. This allows us to train a 3-layer LSTM model with 2048dimensional hidden layers, with embeddings in 128 dimensions. The model was trained to predict the next character given a sequence of characters, using stochastic gradient descent. We attenuate the learning rate over time, and by 20 epochs the model converges. Rhythm Modeling Although a character-level language model trained on a corpus of generic poetry allows us to generate interesting text, internal irregularities and noise in the training data prevent the model from learning important features such as rhythm. Hence, we require an additional classifier to constrain our model by either accepting or rejecting sampled lines based on the presence or absence of these features. As the presence of meter (rhythm) is the most characteristic feature of poetry, it therefore must be our primary focus. Pronunciation dictionaries have often been used to determine the syllabic stresses of words (Colton et al., 2012; Manurung et al., 2000; Misztal and Indurkhya, 2014), but suffer from some limitations for constructing a classifier. All word pronunciations are considered equiprobable, including archaic and uncommon pronunciations, and pronunciations are provided context free, despite the importance of context for pronunciation3. Furthermore, they are constructed from American English, meaning that British English may be misclassified. These issues are circumvented by applying lightly supervised learning to determine the contextual stress pattern of any word. That is, we exploit the latent structure in our corpus of sonnet poetry, namely, the fact that sonnets are composed of lines in rigid Iambic Pentameter, and are therefore exactly ten syllables long with alternating syllabic stress. This allows us to derive a syllablestress distribution. Although we use the sonnets corpus for this, it is important to note that any corpus with such a latent structure could be used. We represent each line of poetry as a cascade of Weighted Finite State Transducers (WFST). A WFST is a finite-state automaton that maps between two sets of symbols. 
It is defined as an eight-tuple where ⟨Q, Σ, ρ, I, F, ∆, λ, p⟩: Q : A set of states Σ : An input alphabet of symbols ρ : An output alphabet of symbols I : A set of initial states F : A set of final states, or sinks ∆ : A transition function mapping pairs of states and symbols to sets of states λ : A set of weights for initial states P : A set of weights for final states 3For example, the independent probability of stressing the single syllable word at is 40%, but this increases to 91% when the following word is the (Greene et al., 2010) 171 A WFST assigns a probability (or weight, in the general case) to each path through it, going from an initial state to an end state. Every path corresponds to an input and output label sequence, and there can be many such paths for each sequence. WFSTs are often used in a cascade, where a number of machines are executed in series, such that the output tape of one machine is the input tape for the next. Formally, a cascade is represented by the functional composition of several machines. W(x, z) = A(x|y) ◦B(y|z) ◦C(z) (3) Where W(x, z) is defined as the ⊕sum of the path probabilities through the cascade, and x and z are an input sequence and output sequence respectively. In the real semiring (where the product of probabilities are taken in series, and the sum of the probabilities are taken in parallel), we can rewrite the definition of weighted composition to produce the following: W(x, z) = ⊕ y A(x | y) ⊗B(y | z) ⊗C(z) (4) As we are dealing with probabilities, this can be rewritten as: P(x, z) = ∑ y P(x | y)P(y | z)P(z) (5) We can perform Expectation Maximisation over the poetry corpus to obtain a probabilistic classifier which enables us to determine the most likely stress patterns for each word. Every word is represented by a single transducer. In each cascade, a sequence of input words is mapped onto a sequence of stress patterns ⟨×, /⟩ where each pattern is between 1 and 5 syllables in length4. We initially set all transition probabilities equally, as we make no assumptions about the stress distributions in our training set. We then iterate over each line of the sonnet corpus, using Expectation Maximisation to train the cascades. In practice, there are several de facto variations of Iambic meter which are permissible, as shown in Figure 2. We train the rhythm classifier by converging the cascades to whatever output is the most likely given the line. 4Words of more than 5 syllables comprise less than 0.1% of the lexicon (Aoyama and Constable, 1998). × / × / × / × / × / / × × / × / × / × / × / × / × / × / × / × / × × / × / × / × / × Figure 2: Permissible variations of Iambic Pentameter in Shakespeare’s sonnets. Generic poetry Sonnet poetry LSTM WFST Rhythmic Output Trained Trained Buffer Constraining the model To generate poetry using this model, we sample sequences of characters from the character-level language model. To impose rhythm constrains on the language model, we first represent these sampled characters at the word level and pool sampled characters into word tokens in an intermediary buffer. We then apply the separately trained word-level WFSTs to construct a cascade of this buffer and perform Viterbi decoding over the cascade. This defines the distribution of stress-patterns over our word tokens. We can represent this cascade as a probabilistic classifier, and accept or reject the buffered output based on how closely it conforms to the desired meter. 
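A hedged sketch of this accept/reject decision follows. The per-word stress distributions stand in for the EM-trained WFST cascade (they are invented for illustration), and the sketch enumerates stress assignments and sums the probability mass of those that keep the buffered line a prefix of the target meter, a simplification of the Viterbi decoding over the cascade described above.

```python
# Hedged sketch of the rhythm classifier: score how well a buffered line
# conforms to a target meter and accept or reject it against a threshold.
# '0' = unstressed syllable, '1' = stressed.
from itertools import product

IAMBIC_PENTAMETER = "0101010101"

# assumption: toy stress distributions; the paper learns these from the sonnet corpus
stress = {
    "cool":   {"1": 0.7, "0": 0.3},
    "light":  {"1": 0.8, "0": 0.2},
    "the":    {"0": 0.9, "1": 0.1},
    "golden": {"10": 0.95, "01": 0.05},
    "dark":   {"1": 0.8, "0": 0.2},
}

def meter_probability(words, target):
    """Probability mass of stress assignments that form a prefix of `target`."""
    total = 0.0
    for patterns in product(*(stress[w].items() for w in words)):
        line = "".join(p for p, _ in patterns)
        prob = 1.0
        for _, p_w in patterns:
            prob *= p_w
        if target.startswith(line):
            total += prob
    return total

def accept(words, target=IAMBIC_PENTAMETER, threshold=0.5):
    return meter_probability(words, target) >= threshold

line = ["cool", "light", "the", "golden", "dark"]
print(round(meter_probability(line, IAMBIC_PENTAMETER), 3))  # 0.164
print(accept(line, threshold=0.1))                           # True
```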
While sampling sequences of words from this model, the entire generated sequence is passed to the classifier each time a new word is sampled. The pronunciation model then returns the probability that the entire line is within the specified meter. If a new word is rejected by the classifier, the state of the network is rolled back to the last formulaically acceptable state of the line, removing the rejected word from memory. The constraint on rhythm can be controlled by adjusting the acceptability threshold of the classifier. By increasing the threshold, output focuses on form over content. Conversely, decreasing the criterion puts greater emphasis on content. 172 Themed Training Set Poetry LSTM Themed Output Training Set Poetry LSTM Themed Output Thematic Boosting Implicit Explicit Figure 3: Two approaches for generating themed poetry. 4.1 Themes and Poetic devices It is important for any generative poetry model to include themes and poetic devices. One way to achieve this would be by constructing a corpus that exhibits the desired themes and devices. To create a themed corpus about ‘love’, for instance, we would aggregate love poetry to train the model, which would thus learn an implicit representation of love. However, this forces us to generate poetry according to discrete themes and styles from pretrained models, requiring a new training corpus for each model. In other words, we would suffer from similar limitations as with the phonetic-level model, in that we require a dedicated corpus. Alternatively, we can manipulate the language model by boosting character probabilities at sample time to increase the probability of sampling thematic words like ‘love’. This approach is more robust, and provides us with more control over the final output, including the capacity to vary the inclusion of poetic devices in the output. Themes In order to introduce thematic content, we heuristically boost the probability of sampling words that are semantically related to a theme word from the language model. First, we compile a list of similar words to a key theme word by retrieving its semantic neighbours from a distributional semantic model (Mikolov et al., 2013). For example, the theme winter might include thematic words frozen, cold, snow and frosty. We represent these semantic neighbours at the character level, and heuristically boost their probability by multiplying the sampling probability of these character strings by their cosine similarity to the key word, plus a constant. Thus, the likelihood of sampling a thematically related word is artificially increased, while still constraining the model rhythmically. Errors per line 1 2 3 4 Total Phonetic Model 11 2 3 1 28 Character Model + WFST 6 5 1 1 23 Character Model 3 8 7 7 68 Table 1: Number of lines with n errors from a set of 50 lines generated by each of the three models. Poetic devices A similar method may be used for poetic devices such as assonance, consonance and alliteration. Since these devices can be orthographically described by the repetition of identical sequences of characters, we can apply the same heuristic to boost the probability of sampling character strings that have previously been sampled. That is, to sample a line with many instances of alliteration (multiple words with the same initial sound) we record the historical frequencies of characters sampled at the beginning of each previous word. After a word break character, we boost the probability that those characters will be sampled again in the softmax. 
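A minimal sketch of this sample-time boosting is shown below, assuming a toy next-character distribution and invented cosine similarities; as in the description above, the boost multiplies a character's probability by the neighbour's similarity plus a constant before renormalisation. The same reweighting hook can also track recently sampled word-initial characters to encourage the alliteration, assonance and consonance devices discussed next.

```python
# Hedged sketch of theme boosting at sample time: characters that keep the
# current word prefix consistent with a thematic word get their probability
# multiplied by (cosine similarity + constant) before renormalisation.
# The LM distribution and similarity scores are toy stand-ins (assumptions).

THEME_NEIGHBOURS = {"frozen": 0.71, "cold": 0.66, "snow": 0.62, "frosty": 0.58}
BOOST_CONSTANT = 1.0

def boost(char_probs, word_prefix):
    """Reweight a next-character distribution given the in-progress word prefix."""
    boosted = dict(char_probs)
    for word, sim in THEME_NEIGHBOURS.items():
        if word.startswith(word_prefix) and len(word) > len(word_prefix):
            nxt = word[len(word_prefix)]        # character that continues the thematic word
            if nxt in boosted:
                boosted[nxt] *= sim + BOOST_CONSTANT
    total = sum(boosted.values())
    return {c: p / total for c, p in boosted.items()}

# toy next-character distribution from the character LM (assumption)
lm_probs = {"o": 0.10, "a": 0.25, "e": 0.30, "n": 0.20, "r": 0.15}
print(boost(lm_probs, "c"))   # 'o' (continuing "cold") is boosted relative to the rest
```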
We only keep track of frequencies for a fixed number of time steps. By increasing or decreasing the size of this window, we can manipulate the prevalence of alliteration. Variations of this approach are applied to invoke consonance (by boosting intra-word consonants) and assonance (by boosting intra-word vowels). An example of two sampled lines with high degrees of alliteration, assonance and consonance is given in Figure 4c. 5 Evaluation In order to examine how effective our methodologies for generating poetry are, we evaluate the proposed models in two ways. First, we perform an intrinsic evaluation where we examine the quality of the models and the generated poetry. Second, we perform an extrinsic evaluation where we evaluate the generated output using human annotators, and compare it to human-generated poetry. 5.1 Intrinsic evaluation To evaluate the ability of both models to generate formulaic poetry that adheres to rhythmic rules, we compared sets of fifty sampled lines from each model. The first set was sampled from the phonetic-level model trained on Iambic poetry. The second set was sampled from the characterlevel model, constrained to Iambic form. For com173 Word Line Coverage Wikipedia 64.84% 83.35% 97.53% Sonnets 85.95% 80.32% 99.36% Table 2: Error when transliterating text into phonemes and reconstructing back into text. parison, and to act as a baseline, we also sampled from the unconstrained character model. We created gold-standard syllabic classifications by recording each line spoken-aloud, and marking each syllable as either stressed or unstressed. We then compared these observations to loose Iambic Pentameter (containing all four variants), to determine how many syllabic misclassifications existed on each line. This was done by speaking each line aloud, and noting where the speaker put stresses. As Table 1 shows, the constrained character level model generated the most formulaic poetry. Results from this model show that 70% of lines had zero mistakes, with frequency obeying an inverse power-law relationship with the number of errors. We can see that the phonetic model performed similarly, but produced more subtle mistakes than the constrained character model: many of the errors were single mistakes in an otherwise correct line of poetry. In order to investigate this further, we examined to what extent these errors are due to transliteration (i.e., the phonetic encoding and orthographic decoding steps). Table 2 shows the reconstruction accuracy per word and per line when transliterating either Wikipedia or Sonnets to phonemes using the CMU pronunciation dictionary and subsequently reconstructing English text using the ngram model5. Word accuracy reflects the frequency of perfect reconstruction, whereas per line tri-gram similarity (Kondrak, 2005) reflects the overall reconstruction. Coverage captures the percentage of in-vocabulary items. The relatively low per-word accuracy achieved on the Wikipedia corpus is likely due to the high frequency of out-ofvocabulary words. The results show that a significant number of errors in the phonetic-level model are likely to be caused by transliteration mistakes. 5Obviously, calculating this value for the character-level model makes no sense, since no transliteration occurs in that case. 5.2 Extrinsic evaluation We conducted an indistinguishability study with a selection of automatically generated poetry and human poetry. 
As extrinsic evaluations are expensive and the phonetic model was unlikely to do well (as illustrated in Figure 4e: the model generates good Iambic form, but not very good English), we only evaluate on the constrained characterlevel model. Poetry was generated with a variety of themes and poetic devices (see supplementary material). The aim of the study was to determine whether participants could distinguish between human and machine-generated poetry, and if so to what extent. A set of 70 participants (of whom 61 were English native speakers) were each shown a selection of randomly chosen poetry segments, and were invited to classify them as either human or generated. Participants were recruited from friends and people within poetry communities within the University of Cambridge, with an age range of 17 to 80, and a mean age of 29. Our participants were not financially incentivised, perceiving the evaluation as an intellectual challenge. In addition to the classification task, each participant was also invited to rate each poem on a 1-5 scale with respect to three criteria, namely readability, form and evocation (how much emotion did a poem elicit). We naively consider the overall quality of a poem to be the mean of these three measures. We used a custom web-based environment, built specifically for this evaluation6, which is illustrated in Figure 5. Based on human judgments, we can determine whether the models presented in this work can produce poetry of a similar quality to humans. To select appropriate human poetry that could be meaningfully compared with the machinegenerated poetry, we performed a comprehension test on all poems used in the evaluation, using the Dale-Chall readability formula (Dale and Chall, 1948). This formula represents readability as a function of the complexity of the input words. We selected nine machine-generated poems with a high readability score. The generated poems produced an average score of 7.11, indicating that readers over 15 years of age should easily be able to comprehend them. For our human poems, we focused explicitly on poetry where greater consideration is placed on 6http://neuralpoetry.getforge.io/ 174 (a) The crow crooked on more beautiful and free, He journeyed off into the quarter sea. his radiant ribs girdled empty and very least beautiful as dignified to see. (c) Man with the broken blood blue glass and gold. Cheap chatter chants to be a lover do. (e) The son still streams and strength and spirit. The ridden souls of which the fills of. (b) Is that people like things (are the way we to figure it out) and I thought of you reading and then is your show or you know we will finish along will you play. (d) How dreary to be somebody, How public like a frog To tell one’s name the livelong day To an admiring bog. Figure 4: Examples of automatically generated and human generated poetry. (a) Character-level model - Strict rhythm regularisation - Iambic - No Theme. (b) Character-level model - Strict rhythm regularisation - Anapest. (c) Character-level model - Boosted alliteration/assonance. (d) Emily Dickinson - I’m nobody, who are you? (e) Phonetic-level model - Nonsensical Iambic lines. Figure 5: The experimental environment for asking participants to distinguish between automatically generated and human poetry. prosodic elements like rhythm and rhyme than semantic content (known as “nonsense verse”). 
We randomly selected 30 poems belonging to that category from the website poetrysoup.com, of which eight were selected for the final comparison based on their comparable readability score. The selected poems were segmented into passages of between four and six lines, to match the length of the generated poetry segments. An example of such a segment is shown in Figure 4d. The human poems had an average score of 7.52, requiring a similar level of English aptitude to the generated texts. The performance of each human poem, alongside the aggregated scores of the generated poems, is illustrated in Table 3. For the human poems, our group of participants guessed correctly that they were human 51.4% of the time. For the generated poems, our participants guessed correctly 46.2% of the time that they were machine generated. To determine whether our results were statistically significant, we performed a Chi2 test. This resulted in a p-value of 0.718. This indicates that our participants were unable to tell the difference between human and generated poetry in any significant way. Although our participants generally considered the human poems to be of marginally higher quality than our generated poetry, they were unable to effectively distinguish between them. Interestingly, our results seem to suggest that our participants consider the generated poems to be more ‘human-like’ than those actually written by humans. In addition, the poem with the highest overall quality rating is a machine generated one. This shows that our approach was effective at generating high-quality rhythmic verse. It should be noted that the poems that were most ‘human-like’ and most aesthetic respectively were generated by the neural character model. Generally the set of poetry produced by the neural character model was slightly less readable and emotive than the human poetry, but had above average form. All generated poems included in this evaluation can be found in the supplementary material, and our code is made available online7. 7https://github.com/JackHopkins/ACLPoetry 175 Poet Title Human Readability Emotion Form Generated Best 0.66 0.60 -0.77 0.90 G. M. Hopkins Carrion Comfort 0.62 -1.09 1.39 -1.55 J. Thornton Delivery of Death 0.60 0.26 -1.38 -0.65 Generated Mean 0.54 -0.28 -0.30 0.23 M. Yvonne Intricate Weave 0.53 2.38 0.94 -1.67 E. Dickinson I’m Nobody 0.52 -0.46 0.92 0.44 G. M. Hopkins The Silver Jubilee 0.52 0.71 -0.33 0.65 R. Dryden Mac Flecknoe 0.51 -0.01 0.35 -0.78 A. Tennyson Beautiful City 0.48 -1.05 0.97 -1.26 W. Shakespeare A Fairy Song 0.45 0.65 1.30 1.18 Table 3: Proportion of people classifying each poem as ‘human’, as well as the relative qualitative scores of each poem as deviations from the mean. 6 Conclusions Our contributions are twofold. First, we developed a neural language model trained on a phonetic transliteration of poetic form and content. Although example output looked promising, this model was limited by its inability to generalise to novel forms of verse. We then proposed a more robust model trained on unformed poetic text, whose output form is constrained at sample time. This approach offers greater control over the style of the generated poetry than the earlier method, and facilitates themes and poetic devices. An indistinguishability test, where participants were asked to classify a randomly selected set of human “nonsense verse” and machine-generated poetry, showed generated poetry to be indistinguishable from that written by humans. 
In addition, the poems that were deemed most ‘humanlike’ and most aesthetic were both machinegenerated. In future work, it would be useful to investigate models based on morphemes, rather than characters, which offers potentially superior performance for complex and rare words (Luong et al., 2013), which are common in poetry. References Hideaki Aoyama and John Constable. 1998. Word length frequency and distribution in english: Observations, theory, and implications for the construction of verse lines. arXiv preprint cmp-lg/9808004 . Gabriele Barbieri, Franc¸ois Pachet, Pierre Roy, and Mirko Degli Esposti. 2012. Markov constraints for generating lyrics with style. In Proceedings of the 20th European Conference on Artificial Intelligence. IOS Press, pages 115–120. Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, Springer, pages 137–186. Alan W Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules . Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Proceedings of ICLR . Simon Colton, Jacob Goodwin, and Tony Veale. 2012. Full face poetry generation. In Proceedings of the Third International Conference on Computational Creativity. pages 95–102. Eric Corlett and Gerald Penn. 2010. An exact a* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1040– 1047. Edgar Dale and Jeanne S Chall. 1948. A formula for predicting readability: Instructions. Educational research bulletin pages 37–54. Pablo Gerv´as. 2000. Wasp: Evaluation of different strategies for the automatic generation of spanish verse. In Proceedings of the AISB-00 Symposium on Creative & Cultural Aspects of AI. pages 93–100. 176 Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1183–1191. Raghav Goyal, Marc Dymetman, and Eric Gaussier. 2016. Natural language generation through character-based rnns with finite-state prior knowledge. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Osaka, Japan, pages 1083– 1092. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 524–533. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615 . Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL on Main conference poster sessions. Association for Computational Linguistics, pages 499– 506. Grzegorz Kondrak. 2005. N-gram similarity and distance. In String processing and information retrieval. Springer, pages 115–126. Thang Luong, Richard Socher, and Christopher D Manning. 2013. 
Better word representations with recursive neural networks for morphology. In CoNLL. pages 104–113. Hisar Manurung, Graeme Ritchie, and Henry Thompson. 2000. Towards a computational model of poetry generation. Technical report, The University of Edinburgh. Stephen McGregor, Matthew Purver, and Geraint Wiggins. 2016. Process based evaluation of computer generated poetry. In The INLG 2016 Workshop on Computational Creativity in Natural Language Generation. page 51. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. volume 2, page 3. Joanna Misztal and Bipin Indurkhya. 2014. Poetry generation system with an emotional personality. In Proceedings of the Fourth International Conference on Computational Creativity. Cory Myers, Lawrence R Rabiner, and Aaron E Rosenberg. 1980. Performance tradeoffs in dynamic time warping algorithms for isolated word recognition. Acoustics, Speech and Signal Processing, IEEE Transactions on 28(6):623–635. Yael Netzer, David Gabay, Yoav Goldberg, and Michael Elhadad. 2009. Gaiku: Generating haiku with word associations norms. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity. Association for Computational Linguistics, pages 32–39. Sujith Ravi and Kevin Knight. 2009. Learning phoneme mappings for transliteration without parallel data. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 37–45. Holger Schwenk and Jean-Luc Gauvain. 2005. Training neural network language models on very large corpora. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 201–208. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 1017–1024. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Jukka M Toivanen, Matti J¨arvisalo, Hannu Toivonen, et al. 2013. Harnessing constraint programming for poetry composition. In Proceedings of the Fourth International Conference on Computational Creativity. page 160. Willie Walker, Paul Lamere, and Philip Kwok. 2010. Freetts 1.2: A speech synthesizer written entirely in the java programming language. R Weide. 2005. The carnegie mellon pronouncing dictionary [cmudict. 0.6]. Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10):1550–1560. Xiaoyuan Yi, Ruoyu Li, and Maosong Sun. 2016. Generating chinese classical poems with rnn encoderdecoder. arXiv preprint arXiv:1604.01537 . 177 Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In EMNLP. pages 670–680. 178
2017
16
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1745–1755 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1160 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1745–1755 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1160 Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing Anssi Yli-Jyr¨a University of Helsinki, Finland [email protected] Carlos G´omez-Rodr´ıguez Universidade da Coru˜na, Spain [email protected] Abstract We present a simple encoding for unlabeled noncrossing graphs and show how its latent counterpart helps us to represent several families of directed and undirected graphs used in syntactic and semantic parsing of natural language as contextfree languages. The families are separated purely on the basis of forbidden patterns in latent encoding, eliminating the need to differentiate the families of non-crossing graphs in inference algorithms: one algorithm works for all when the search space can be controlled in parser input. 1 Introduction Dependency parsing has received wide attention in recent years, as accurate and efficient dependency parsers have appeared that are applicable to many languages. Traditionally, dependency parsers have produced syntactic analyses in tree form, including exact inference algorithms that search for maximum projective trees (Eisner and Satta, 1999) and maximum spanning trees (McDonald et al., 2005) in weighted digraphs, as well as greedy and beamsearch approaches that forgo exact search for extra efficiency (Zhang and Nivre, 2011). Recently, there has been growing interest in providing a richer analysis of natural language by going beyond trees. In semantic dependency parsing (Oepen et al., 2015; Kuhlmann and Oepen, 2016), the desired syntactic representations can have indegree greater than 1 (re-entrancy), suggesting the search for maximum acyclic subgraphs (Schluter, 2014, 2015). As this inference task is intractable (Guruswami et al., 2011), noncrossing digraphs have been studied instead, e.g. by Kuhlmann and Johnsson (2015) who provide a O(n3) parser for maximum noncrossing acyclic subgraphs. Yli-Jyr¨a (2005) studied how to axiomatize dependency trees as a special case of noncrossing digraphs. This gave rise to a new homomorphic representation of context-free languages that proves the classical Chomsky and Sch¨utzenberger theorem using a quite different internal language. In this language, the brackets indicate arcs in a dependency tree in a way that is reminiscent to encoding schemes used earlier by Greibach (1973) and Oflazer (2003). Cubic-time parsing algorithms that are incidentally or intentionally applicable to this kind of homomorphic representations have been considered, e.g., by Nederhof and Satta (2003), Hulden (2011), and Yli-Jyr¨a (2012). Extending these insights to arbitrary noncrossing digraphs, or to relevant families of them, is far from obvious. In this paper, we develop (1) a linear encoding supporting general noncrossing digraphs, and (2) show that the encoded noncrossing digraphs form a context-free language. We then give it (3) two homomorphic, nonderivative representations and use the latent local features of the latter to characterize various families of digraphs. 
Apart from the obvious relevance to the theory of context-free languages, this contribution has the practical potential to enable (4) generic contextfree parsers that produce different families of noncrossing graphs with the same set of inference rules while the search space in each case is restricted with lexical features and the grammar. Outline After some background on graphs and parsing as inference (Section 2), we use an ontology of digraphs to illustrate natural families of noncrossing digraphs in Section 3. We then develop, in Section 4, the first latent contextfree representation for the set of noncrossing digraphs, then extended in Section 5 with additional latent states supporting our finite-state axiomatization of digraph properties, and allowing us to 1745 control the search space using the lexicon. The experiments in Section 6 cross-validate our axioms and sample the growth of the constrained search spaces. Section 7 outlines the applications for practical parsing, and Section 8 concludes. 2 Background Graphs and Digraphs A graph is a pair (V,E) where V is a finite set of vertices and E ⊆ {{u,v} ⊆V} is a set of edges. A sequence of edges of the form {v0,v1}, {v1,v2}, ..., {vm−1,vm}, with no repetitions in v1,...,vm, is a path between vertices v0 and vm and empty if m = 0. A graph is a forest if no vertex has a non-empty path to itself and connected if all pairs of vertices have a path. A tree is a connected forest. A digraph is a pair (V,A) where A ⊆V ×V is a set of arcs u →v, thus a directed graph. Its underlying graph, (V,EA), has edges EA = {{u,v} | (u,v) ∈A}. A sequence of arcs v0 →v1,v1 → v2,...,vm−1 →vm, with no repetitions in v1,...,vm, is a directed path, and empty if m = 0. A digraph without self-loops v →v is loop-free (property DIGRAPHLF). We will focus on loopfree digraphs unless otherwise specified, and denote them just by DIGRAPH for brevity. A digraph is d-acyclic (ACYCD), aka a dag if no vertex has a non-empty directed path to itself, uacyclic (ACYCU) aka a m(ixed)-forest if its underlying graph is a forest, and weakly connected (w.c., CONNW) if its underlying graph is connected. Dependency Parsing The complete digraph GS(V,A) of a sentence S = x1...xn consists of vertices V = {1,...,n} and all possible arcs A = V × V −{(i,i)}. The vertex i ∈V corresponds to the word xi and the arc i →j ∈A corresponds to a possible dependency between the words xi and xj. The task of dependency parsing is to find a constrained subgraph G′ S(V,A′) of the complete digraph GS of the sentence. The standard solution is a rooted directed tree called a dependency tree or a dag called a dependency graph. Constrained Inference In arc-factored parsing (McDonald et al., 2005), each possible arc i →j is equipped with a positive weight wij, usually computed as a weighted sum wij = w · Φ(S,i →j) where w is a weight vector and Φ(x,i →j) a feature vector extracted from the sentence x, considering the dependency relation from word xi to word xj. Parsing then consists in finding an arc subset A′ ⊆A that gives us a constrained subgraph (V,A′) ∈Constrained(V,A) of the complete digraph (V,A) with maximum sum of arc weights: (V,A′) = argmax (V,A′) ∈Constrained(V,A) ∑ i→j∈A′ wi,j. The complexity of this inference task depends on the constraints imposed on the subgraph. Under no constraints, we simply set A′ = A. Inference over dags is intractable (Guruswami et al., 2011). 
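As a reference point for the arc-factored formulation above, the following sketch scores candidate subgraphs by summing arc weights w_ij = w · Φ(S, i → j). The feature template is a toy stand-in for the rich lexical features used in practice, and the positive-weight remark from the text (the unconstrained argmax is the complete arc set) is made explicit in a comment.

```python
# Hedged sketch of arc-factored scoring: each candidate arc i -> j gets weight
# w_ij = w . Phi(S, i -> j) and a candidate subgraph A' is scored by the sum
# of its arc weights. Toy feature template (assumption).
import numpy as np

def phi(sentence, i, j):
    """Toy feature vector for arc i -> j: bias, direction, distance, word lengths."""
    return np.array([1.0, float(i < j), float(abs(i - j)),
                     float(len(sentence[i - 1])), float(len(sentence[j - 1]))])

def arc_weights(sentence, w):
    n = len(sentence)
    return {(i, j): float(w @ phi(sentence, i, j))
            for i in range(1, n + 1) for j in range(1, n + 1) if i != j}

def subgraph_score(arcs, weights):
    return sum(weights[a] for a in arcs)

sentence = ["the", "plot", "is", "unpredictable"]
w = np.array([0.5, 1.0, -0.2, 0.1, 0.1])
weights = arc_weights(sentence, w)
# with strictly positive weights, as assumed in the text, the unconstrained
# argmax is simply the complete arc set A' = A
full = list(weights)
print(round(subgraph_score(full, weights), 2))
```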
Efficient solutions are known for projective trees (Eisner, 1996), various classes of mildly nonprojective trees (G´omez-Rodr´ıguez, 2016), unrestricted spanning trees (McDonald et al., 2005), and both unrestricted and weakly connected noncrossing dags (Kuhlmann and Johnsson, 2015). Parsimony Semantic parsers must be able to produce more than projective trees because the share of projective trees is pretty low (under 3%) in semantic graph banks (Kuhlmann and Johnsson, 2015). However, if we know that the parses have some restrictions, it is better to use them to restrict the search space as much as possible. There are two strategies for reducing the search space. One is to develop a specialized inference algorithm for a particular natural language or family of dags, such as weakly connected graphs (Kuhlmann and Johnsson, 2015). The other strategy is to control the local complexity of digraphs through lexical categories (Baldridge and Kruijff, 2003) or equivalent mechanisms. This strategy produces a more sensitive model of the language, but requires a principled insight on how the complexity of digraphs can be characterized. 3 Constraints on the Search Space We will now present a classification of digraphs on the basis of their formal properties. The Noncrossing Property For convenience, graphs and digraphs may be ordered like in a complete digraph of a sentence. Two edges {i, j}, {k,l} in an ordered graph or arcs i → j,k →l in an ordered digraph are said to be crossing if min{i, j} < min{k,l} < max{i, j} < max{k,l}. A graph or digraph is noncrossing if it has no crossing edges or arcs. Noncrossing (di)graphs (NC-(DI)GRAPH) are the largest possible (di)graphs that can be drawn on a circle without crossing arcs. In the following, we assume that all digraphs and graphs are noncrossing. 1746 An arc x →y is (properly) covered by an arc z → t if ({x,y} ̸= {z,t}) and min{z,t} ≤min{x,y} ≤ max{x,y} ≤max{z,t}. Ontology Fig. 1 presents an ontology of such families of loop-free noncrossing digraphs that can be distinguished by digraphs with 5 vertices. In the digraph ontology, a multitree aka mangrove is a dag with the property of being strongly unambiguous (UNAMBS), which asserts that, given two distinct vertices, there is at most one repeat-free path between them (Lange, 1997).1 A polytree (Rebane and Pearl, 1987) is a multitree whose underlying graph is a tree. The out property (OUT) of a digraph (V,E) means that no vertex i ∈V has two incoming arcs { j,k} →i s.t. j ̸= k. NC-DIGRAPH +5460 CONNW +43571 UNAMBS +80 ORIENTED +140 ACYCU +1200 OUT +10 w.c.unamb. +600 w.c.or. +1160 unamb.or. +80 ACYCD +840 out oriented +130 out m-forest +435 mixed tree +3355 multitree +10 w.c.dag +2960 w.c.unamb.or. +370 out mixed tree +220 w.c. out oriented +132 w.c.multitree +50 or.forest +300 polytree +605 out or.forest +481 out or.tree +275 Figure 1: Basic properties split the set of 62464 noncrossing digraphs for 5 vertices into 23 classes An ordered digraph is weakly projective (PROJW) if for all vertices i, j and k, if k →j →i, then either {i, j} < k or {i, j} > k. In other words, the constraint, aka the outside-to-inside constraint (Yli-Jyr¨a, 2005), states that no outgoing arc of a vertex properly covers an incoming arc. This is implied by a stronger constraint known as Harper, Hays, Lecerf and Ihm projectivity (Marcus, 1967). 
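The noncrossing condition itself can be tested directly from the definition just given; a small sketch follows, with arcs represented as ordered pairs of vertex positions.

```python
# Hedged sketch of the noncrossing test: two arcs cross iff
# min{i,j} < min{k,l} < max{i,j} < max{k,l}; a digraph is noncrossing iff
# no pair of its arcs crosses.
from itertools import combinations

def crossing(a, b):
    (i, j), (k, l) = sorted([tuple(sorted(a)), tuple(sorted(b))])
    return i < k < j < l

def is_noncrossing(arcs):
    return not any(crossing(a, b) for a, b in combinations(arcs, 2))

print(is_noncrossing([(1, 2), (2, 4), (4, 1)]))   # True: drawable on a circle without crossings
print(is_noncrossing([(1, 3), (2, 4)]))           # False: {1,3} and {2,4} cross
```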
We can embed the ontology of graphs (unrestricted, connected, forests and trees) into the ontology of digraphs by viewing an undirected graph (V,E) as an inverse digraph (V,{(i, j),(j,i) | {i, j} ∈E}). This kind of digraph has an inverse property (INV). Its opposite is an oriented (or.) digraph (V,A) where i →j ∈A implies j →i /∈A (defines the property ORIENTED). Out forests and trees are, by convention, oriented digraphs with an underlying forest or tree, respectively. 1A different definition forbids diamonds as minors. Distinctive Properties A few important properties of digraphs are local and can be verified by inspecting each vertex separately with its incident arcs. These include (i) the out property (OUT), (ii) the nonstandard projectivity property (PROJW), (iii) the inverse property (INV) and (iv) the orientedness (or.) property. Properties UNAMBS, ACYCD, CONNW, and ACYCU are nonlocal properties of digraphs and cannot be generally verified locally, through finite spheres of vertices (Gr¨adel et al., 2005). The following proposition covers the configurations that we have to detect in order to decide the nonlocal properties of noncrossing digraphs. Proposition 1. Let G = (V,E) be a noncrossing digraph. • If G /∈UNAMBS, then the digraph contains one of the following four configurations or their reversals: u v y u v y u v y u v x y • If G /∈ACYCD, then the graph contains one of the configurations u v y u v y u v • If G /∈ACYCU, then the underlying graph contains the following configuration: u v y • If G /∈CONNW, then the underlying graph contains one of the following configurations: ... v y ... no arc no arc ... v ... no arc no arc Proposition 1 gives us a means to implement the property tests in practice. It tells us intuitively that although the paths can be arbitrarily long, any underlying cycle containing more than 2 arcs consists of one covering arc and a linear chain of edges between its end points. 4 The Set of Digraphs as a Language In this section, we show that the set of noncrossing digraphs is isomorphic to an unambiguous context-free language over a bracket alphabet. 4.1 Basic Encoding Any noncrossing ordered graph ([1,...,n],E), even with self-loops, can be encoded as a string of brackets using the algorithm enc in Fig. 2. For example, the output for the ordered graph 1747 func enc(n,E): func dec(stdin): for i in [1,...,n]: n = 1; E = {}; s = [] for j in [i-1,...,2,1]: while c in stdin: if {j,i} in E: if c == "[": print "]" s.push(n) for j in [n,n-1,...,i+1]: if c == "]": if {i,j} in E: i = s.pop() print "[" E.insert((i,n)) if {i,i} in E: if c == "{": print "[]" n = n + 1 if i<n: print "{}" return (n,E) Figure 2: The encoding and decoding algorithms 1 2 3 4 n = 4, E =  {1,2}, {2,2} {2,4}, {1,4}  is the string [[{}][[]{}{}]]. Intuitively, pairs of brackets of the form {} can be interpreted as spaces between vertices, and then each set of matching brackets [...] encodes an arc that covers the spaces represented inside the brackets. Any noncrossing ordered digraph ([1,...,n],A) can be encoded with slight modifications to the algorithm. Instead of printing [ ] for an edge {i, j} ∈EA, i ≤j, the algorithm should now print / > if (i, j) ∈A,(j,i) ̸∈A; < / if (i, j) /∈A,(j,i) ∈A; [ ] if (i, j),( j,i) ∈A. In this way, we can simply encode the digraph ({1,2,3,4},{(1,2),(2,2),(4,1),(4,2)}) as the string </{}><[]{}{}//. Proposition 2. The encoding respects concatenation where the adjacent nonempty operands have a common vertex. 
Context-Freeness Arbitrary strings with balanced brackets form a context-free language that is known, generically, as a Dyck language. It is easy to see that the graphs NC-GRAPH are encoded with strings that belong to the Dyck language D2 generated by the context-free grammar: S →[S]S | {S}S | ε. The encoded graphs, LNC-GRAPH, are, however, generated exactly by the context-free grammar S →[S′] S | {} S | ε, S′ →[S′] T | {} S, T →[S′] S | {} S. This language is an unambiguous context-free language. Proposition 3. The encoded graphs, LNC-GRAPH, make an unambiguous context-free language. The practical significance of Proposition 3 is that there is a bijection between LNC-GRAPH and the derivation trees of a context-free grammar. 4.2 Bracketing Beyond the Encoding Non-Derivational Representation A nonderivational representation for any context-free language L has been given by Chomsky and Sch¨utzenberger (1963). This replaces the stack with a Dyck language D and the grammar rules with co-occurrence patterns specified by a regular language Reg. To hide the internal alphabet from the strings of the represented language, there is a homomorphism that cleans the internal strings of Reg and D from internal markup to get actual strings of the target language: LNC-GRAPH = h(D∩Reg). To make this concrete, replace the previous context free grammar by S′′ →[′S′]′ S | {} S | ε, S →[S′] S | {} S | ε, S′ →[′S′]′ T | {} S, T → [S′] S | {} S. The homomorphism h (Fig. 3a) would now relate this language to the original language, mapping the string [′[′{}]′[[′{}]′{}]]′ to the string [[{}][[{}]{}]], for example. The Dyck language D = D3 checks that the internal brackets are balanced, and the regular component Reg (Fig. 3b) checks that the new brackets are used correctly. A similar representation for the language LNC-DIGRAPH of encoded digraphs can be obtained with straightforward extensions.                (a) (b) Figure 3: The h and Reg components The representation L = h(D ∩Reg) is unambiguous if, for every word w ∈L, the preimage h−1(w) ∩D ∩Reg is a single string. This implies that L is an unambiguous context-free language. Proposition 4. The set of encoded digraphs, LNC-DIGRAPH, has an unambiguous representation. Proposition 5. Let Li = h(D ∩Ri), i ∈{0,1,2} be unambiguous representations with R1,R2 ⊆R0. Then L3 = h(D ∩(R1 ∩R2)) is an unambiguous context-free language and the same as L1 ∩L2. Proof. It is immediate that L3 ⊆L1 ∩L2 and L3 is an unambiguous context-free language. To show that L1 ∩L2 ⊆L3, take an arbitrary s ∈L1 ∩L2. Since R1,R2 ⊆R0 there is a unique s′ ∈h−1(s) such that s′ ∈D∩(R1 ∩R2). Thus s ∈L3. 5 Latent Bracketing In this section, we extend the internal strings of the non-derivational representation of LNC-DIGRAPH in 1748 such a way that the configurations given in Proposition 1 can be detected locally from these. Classification of Underlying Chains A maximal linear chain is a maximally long sequence of one or more edges that correspond to an underlying left-to-right path in the underlying graph in such a way that no edge in this chain is properly covered by an edge that does not properly cover all the edges in the chain. For example, the graph [′[′{}]′[[′{}]′[{}]][[′{}]′{}[{}]]]′[{}[{ }]{}] I II III II III II I IV V VI contains six maximal linear chains, indicated with their Roman numbers on each arc. We decide nonlocal properties of noncrossing digraphs by recognizing maximal linear chains as parts of configurations presented in Proposition 1. 
Every loose chain (like V and VI) starts with a bracket that is adjacent to a }-bracket. Such a chain can contribute only a covering edge to an underlying cycle. In contrast, a bracket with an apostrophe marks the beginning of a non-loose chain that can either start at the first vertex, or share starting point with a covering chain. When a nonloose chain is covered, it can be touched twice by a covering edge. The prefixes of chains are classified incrementally, from left to right, with a finite automaton (Figure 4). All states of the automaton are final and correspond to distinct classes of the chains. These classes are encoded to an extended set of brackets.                                                                                                Figure 4: The finite automaton whose state 0 begins non-loose chains and state 1 loose chains The automaton is symmetric: states with uppercase names are symmetrically related with corresponding lowercase states. Thus, it suffices to define the initial and uppercase-named states: 0 the initial state for a non-loose chain; I a bidirectional chain: u ↔(v ↔)y; A a primarily bidirectional forward chain: u ↔v →y; F a forward chain: u →v →y; Q a primarily forward chain: u →v ↔(··· →)y; C a primarily forward 1-turn chain: u →v ←y; E a primarily forward 2-turn chain: u →v ←x →y; Z a 3-turn chain; 1 the initial (and only) state for a loose chain; Recognition of ambiguous paths in configurations u−−−−−→ ←−− →→v ←y and u −−−−−−−−−−→ ←−−−−−−− ←v →x ←←y involves three chain levels. To support the recognition, subtypes of edges are defined according to the chains they cover. The brackets >I’, \I’, >I, \I, \A, >a, \Q, >Q, >q,\q, >C, \c, \E, >e indicate edges that constitute a cycle with the chain they cover. The brackets >V’, \v’, >V, \v indicate edges that cover 2-turn chains. Not all states make these distinctions. Extended Representation The extended brackets encode the latent structure of digraphs: the orientation and the subtype of the edge and the class of the chain. The total alphabet Σ of the strings now contains the boundary brackets {} and 54 pairs of brackets (Figure 4) for edges from which we obtain a new Dyck language, D55, and an extended homomorphism hlat. The Reg component of the language representation is replaced with Reglat, that is, an intersection of (1) an inverse homomorphic image of Reg to strings over the extended alphabet, (2) a local language that constrains adjacent edges according to Figure 4, (3) a local language specifying how the chains start, and (4) a local language that distinguishes pure oriented edges from those that cover a cycle or a 2-turn chain. The new component requires only 24 states as a deterministic automaton. Proposition 6. hlat(D55 ∩Reglat) is an unambiguous representation for LNC-DIGRAPH. The internal language LNC-DIGRAPHlat = D55 ∩ Reglat is called the set of latent encoded digraphs. Example Here is a digraph with its latent encoding: <f′ [I′ | {z } 1 {}]I′ /0 /F′ | {z } 2 {} >F′ |{z} 3 {} <. |{z} 4 {} /. |{z} 5 {} >. |{z} 6 {}/. 
>0 /f′ | {z } 7 The brackets in the extended representation contain information that helps us recognize, through local patterns, that this graph has a directed cycle 1749 Forbidden patterns in noncrossing digraphs¸ Property Constraint language RlooseR a nonloose chain ACYCU AU = Σ∗−Σ∗RlooseRΣ∗ Rloose(no connecting edges) (a vertex without edges) CONNW CW = Σ∗−Σ∗Rloose(ε ∪BΣ∗)−(BΣ∗∪Σ∗B) RrightR/ RleftR> forward backward inverted arc ACYCD AD = Σ∗−Σ∗(RrightR/ ∪RleftR> ∪Σinv)Σ∗ RrightR> RleftR/ RvergentR forward backward con/divergent Rleft2R> Rright2R\ divergent backward forward divergent UNAMBS US = Σ∗−Σ∗(RrightR> ∪RleftR/ ∪RvergentR)Σ∗ −Σ∗(Rleft2R> ∪Rright2R\)Σ∗ L/L< R>R/ PROJW PW = Σ∗−Σ∗(L/L< ∪R>R/)Σ∗ (an arc without inverse) INV I = Σ∗−Σ∗ΣorΣ∗ (a state with more than 2 incoming arcs) OUT Out = Σ∗−Σ∗Σin(Σ−B)∗ΣinΣ∗ (an inverted edge) ORIENTED O = Σ∗−Σ∗ΣinvΣ∗ Table 1: Properties of encoded noncrossing digraphs as constraint languages (directed path 1 →2 →7 →1), is strongly ambiguous (two directed paths 2 →1 and 2 →7 →1) and is not weakly connected (vertices 5 and 6 are not connected to the rest of the digraph). Expressing Properties via Forbidden Patterns We now demonstrate that all the mentioned nonlocal properties of graphs have become local in the extended internal representation of the code strings LNC-DIGRAPH for noncrossing digraphs. These distinctive properties of graph families reduce to forbidden patterns in bracket strings and then compile into regular constraint languages. These are presented in Table 1. To keep the patterns simple, subsets of brackets are defined: L/ [-,/-brackets L< [-,<-brackets R> ]-,>-brackets R/ ]-,\-brackets B {, } R R> ∪R\ Rloose }, >., /., ]. Rloose R−Rloose Rright R reaching F,Q,I,A Rleft R reaching f,q,i,a Rright2 >P, >2, >E, \E, ]E Rleft2 \p, \2, \e, >e, ]e Σin L< ∪R> B Σ−B Rvergent non-’ R reaching I,Q,q,A,a,C,c Σor all brackets for oriented edges Σinv all brackets for inverted edges 6 Validation Experiments The current experiments were designed (1) to help in developing the components of Reglat and the constraint languages of axiomatic properties, (2) to validate the representation, the constraint languages and their unambiguity, (3) to learn about the ontology and (4) to sample the integer sequences associated with the cardinality of each family in the ontology. Finding the Components Representations of Reglat were built with scripts written using a finitestate toolkit (Hulden, 2009) that supports rapid exploration with regular languages and transducers. Validation of Languages Our scripts presented alternative approaches to compute languages of encoded digraphs with n vertices up to n = 9. We also implemented a Python script that enumerated elements of families of graphs up to n = 6. The solutions were used to cross-validate one another. The constraint Gn = B∗({}B∗)n−1 ensures nvertices in encoded digraphs. The finite set of encoded acyclic 5-vertex digraphs was computed with a finite-state approach (Yli-Jyr¨a et al., 2012) that takes the input projection of the composition Id(Reglat ∩AD∩G5)◦T55◦T55◦T55◦T55◦T55◦Id(ε) where Id defines an identity relation and transducer T55 eliminates matching adjacent brackets. This composition differs from the typical use where the purpose is to construct a regular relation (Kaplan and Kay, 1994) or its output projection (Roche, 1996; Oflazer, 2003). For digraphs with a lot of vertices, we had an option to employ a dynamic programming scheme (Yli-Jyr¨a, 2012) that uses weighted transducers. 
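For very small n, the cardinalities sampled in Table 2 can also be cross-checked by brute force, in the spirit of the Python enumerator mentioned above. The sketch below is ours (and exponential in the number of potential edges, so only usable for tiny n); it counts unrestricted noncrossing graphs and should reproduce the prefix 2, 8, 48, 352 of sequence A054726.

from itertools import combinations

def count_noncrossing_graphs(n):
    # Count edge sets over vertices 1..n in which no two edges cross.
    edges = list(combinations(range(1, n + 1), 2))
    def crossing(e, f):
        (a, b), (c, d) = sorted([e, f])      # ensures a <= c
        return a < c < b < d                 # {a,b} and {c,d} interleave
    count = 0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            if all(not crossing(e, f) for e, f in combinations(subset, 2)):
                count += 1
    return count

print([count_noncrossing_graphs(n) for n in range(2, 6)])   # expected: [2, 8, 48, 352]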
Building the Ontology To build the ontology in Figure 1 we first found out which combinations of digraph properties co-occur to define distinguishable families of digraphs. After the nodes of the 1750 lattice were found, we were able to see the partial order between these. Integer Sequences We sampled, for important families of digraphs, the prefixes of their related integer sequences. We found out that each family of graphs is pretty much described by its cardinality, see Table 2. In many cases, the number sequence was already well documented (OEIS Foundation Inc., 2017). 7 The Formal Basis of Practical Parsing While presenting a practical parser implementation is outside of the scope of this paper, which focuses in the theory, we outline in this section the aspects to take into account when applying our representation to build practical natural language parsers. Positioned Brackets In order to do inference in arc-factored parsing, we incorporate weights to the representation. For each vertex in Gn, the brackets are decorated with the respective position number. Then, we define an input-specific grammar representation where each pair of brackets in D gets an arc-factored weight given the directions and the vertex numbers associated with the brackets. Grammar Intersection We associate, to each Gn, a quadratic-size context-free grammar that generates all noncrossing digraphs with n vertices. This grammar is obtained by computing (or even precomputing) the intersection D55 ∩Reglat ∩Gn in any order, exploiting the closure of contextfree languages under intersection with regular languages (Bar-Hillel et al., 1961). The introduction of the position numbers and weights in the Dyck language gives us, instead, a weighted grammar and its intersection (Lang, 1994). This grammar is a compact representation for a finite set of weighted latent encoded digraphs. Additional constraints during the intersection tailors the grammar to different families of digraphs. Dynamic Programming The heaviest digraph is found with a dynamic programming algorithm that computes, for each nonterminal in the grammar, the weight of the heaviest subtree. A careful reader may notice some connections to Eisner algorithm (Eisner and Satta, 1999), context-free parsing through intersection (Nederhof and Satta, 2003), and a dynamic programming scheme that uses contracting transducers and factorized composition (Yli-Jyr¨a, 2012). Unfortunately, space does not permit discussing the connections here. Lexicalized Search Space In practical parsing, we want the parser behavior and the dependency structure to be sensitive to the lexical entries or features of each word. We can replace the generic vertex description B∗in Gn with subsets that depend on respective lexical entries. Graphical constraints can be applied to some vertices but relaxed for others. This application of current results gives a principled, graphically motivated solution to lexicalized control over the search space. 8 Conclusion We have investigated the search space of parsers that produce noncrossing digraphs. Parsers that can be adapted to different needs are less dependent on artificial assumptions on the search space. Adaptivity gives us freedom to model how the properties of digraphs are actually distributed in linguistic data. As the adaptive data analysis deserves to be treated in its own right, the current work focuses on the separation of the parsing algorithm from the properties of the search space. This paper makes four significant contributions. 
Contribution 1: Digraph Encoding The paper introduces, for noncrossing digraphs, an encoding that uses brackets to indicate edges. Bracketed trees are widely used in generative syntax, treebanks and structured document formats. There are established conversions between phrase structure and projective dependency trees, but the currently advocated edge bracketing is expressive and captures more than just projective dependency trees. This capacity is welcome as syntactic and semantic analysis with dependency graphs is a steadily growing field. The edge bracketing creates new avenues for the study of connections between noncrossing graphs and context-free languages, as well as their recognizable properties. By demonstrating that digraphs can be treated as strings, we suggest that practical parsing to these structures could be implemented with existing methods that restrict context-free grammars to a regular yield language. Contribution 2: Context-Free Properties Acyclicity and other important properties of noncrossing digraphs are expressible as unambiguous context-free sets of encoded noncrossing 1751 Table 2: Characterizations for some noncrossing families of digraphs and graphs Name Sequence prefix for n = 2,3,... Example Name Sequence prefix for n = 2,3,... Example digraph (KJ): 4,64,1792,62464,2437120,101859328 hlat(D55 ∩Gn ∩Reglat) 1 2 3 4 5 weakly projective digraph 4,36,480,7744,138880,2661376 hlat(D55 ∩Gn ∩Reglat ∩PW ) 1 2 3 4 5 w.c.digraph 3,54,1539,53298,2051406,84339468 hlat(D55 ∩Gn ∩Reglat ∩CW ) 1 2 3 4 5 w.p. w.c.digraph 3,26,339,5278,90686,1658772 hlat(D55 ∩Gn ∩Reglat ∩PW ∩CW ) 1 2 3 4 5 unamb.digr. 4,39,529,8333,142995,2594378 hlat(D55 ∩Gn ∩Reglat ∩US) 1 2 3 4 5 w.p. unamb.digr. 4,29,275,3008,35884,453489 hlat(D55 ∩Gn ∩Reglat ∩PW ∩US) 1 2 3 4 5 m-forest 4,37,469,6871,109369,1837396,32062711 hlat(D55 ∩Gn ∩Reglat ∩AU) 1 2 3 4 5 w.p. m-forest 4,29,273,2939,34273,421336 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AU) 1 2 3 4 5 out digraph 4,27,207,1683,14229,123840,1102365 hlat(D55 ∩Gn ∩Reglat ∩Out) 1 2 3 4 5 w.p. out digraph 4,21,129,867,6177,45840,350379 hlat(D55 ∩Gn ∩Reglat ∩PW ∩Out) 1 2 3 4 5 or. digraph 3,27,405,7533,156735,3492639,77539113 hlat(D55 ∩Gn ∩Reglat ∩O) 1 2 3 4 5 w.p. or.digraph see w.p.dag hlat(D55 ∩Gn ∩Reglat ∩PW ∩O) see w.p.dag dags (A246756): 3,25,335,5521,101551 hlat(D55 ∩Gn ∩Reglat ∩AD) 1 2 3 4 5 w.p. dag 3,21,219,2757,38523, 574725, 8967675 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD) 1 2 3 4 5 w.c. dag (KJ): 2,18,242,3890,69074,1306466 hlat(D55 ∩Gn ∩Reglat ∩AD ∩CW ) 1 2 3 4 5 w.p. w.c. dag 2,14,142,1706,22554,316998,4480592 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩CW ) 1 2 3 4 5 multitree 3,19,167,1721,19447,233283,2917843 hlat(D55 ∩Gn ∩Reglat ∩AD ∩US) see oriented forest or w.c. multitree w.p. multitree 3,17,129,1139,11005,112797,1203595 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩US) 1 2 3 4 5 or.forest 3,19,165,1661,18191,210407,2528777 hlat(D55 ∩Gn ∩Reglat ∩AD ∪AU) 1 2 3 4 5 w.p. or.forest 3,17,127,1089,10127,99329,1010189 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∪AU) 1 2 3 4 5 w.c. multitree 2,12,98,930,9638,105798,1201062 hlat(D55 ∩Gn ∩Reglat ∩AD ∩US ∩CW ) 1 2 3 4 5 w.p. w.c. multitree 2,10,68,538,4650,42572,404354 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩US ∩CW ) 1 2 3 4 5 out or.forest 3,16,105,756,5738,45088,363221 hlat(D55 ∩Gn ∩Reglat ∩AD ∩Out) 1 2 3 4 5 w.p. out or.forest (A003169): 3,14,79,494,3294,22952 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩Out) 1 2 3 4 5 polytree (A153231): 2,12,96,880,8736,91392 hlat(D55 ∩Gn ∩Reglat ∩AD ∩CW ∩AU) 1 2 3 4 5 w.p. 
polytree (A027307):2,10,66,498,4066,34970 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩CW ∩AU) 1 2 3 4 5 out or.tree (A174687): 2,9,48,275,1638,9996 hlat(D55 ∩Gn ∩Reglat ∩AD ∩CW ∩Out) 1 2 3 4 5 projective out or.tree (A006013): 2,7,30,143,728,3876,21318 hlat(D55 ∩Gn ∩Reglat ∩PW ∩AD ∩CW ∩Out) 1 2 3 4 5 graph (A054726): 2,8,48,352,2880,25216 hlat(D55 ∩Gn ∩Reglat ∩I) 1 2 3 4 5 connected graph (A007297): 1,4,23,156,1162,9192 hlat(D55 ∩Gn ∩Reglat ∩I ∩CW ) 1 2 3 4 5 forest (A054727): 2,7,33,181,1083,6854 hlat(D55 ∩Gn ∩Reglat ∩I ∩AU) 1 2 3 4 5 tree (A001764,YJ): 1,3,12,55,273,1428,7752 hlat(D55 ∩Gn ∩Reglat ∩I ∩AU ∩CW ) 1 2 3 4 5 A = (OEIS Foundation Inc., 2017), KJ = Kuhlmann (2015) or Kuhlmann and Johnsson (2015), YJ = Yli-Jyr¨a (2012) digraphs. This facilitates the incorporation of property testing to dynamic programming algorithms that implement exact inference. Descriptive complexity helps us understand to which degree various graphical properties are local and could be incorporated into efficient dynamic programming during exact inference. It is well known that acyclicity and connecticity are not definable in first-order logic (FO) while they can be defined easily in monadic second order logic (MSO) (Courcelle, 1997). MSO involves set-valued variables whose use in dynamic programming algorithms and tabular parsing is inefficient. MSO queries have a brute force transformation to first-order (FO) logic, but this does not generally help either as it is well known that MSO can express intractable problems. The interesting observation of the current work is that some MSO definable properties of digraphs become local in our extended encoding. This encoding is linear compared to the size of digraphs: each string over the extended bracket alphabet encodes a fixed assignment of MSO variables. The properties of noncrossing digraphs now reduce to properties of bracketed trees with linear amount of func noncrossing_ACYCU(n,E): for {u,y} in E and u < y: # covering edge [v,p] = [u,u] while p != -1: # chain continues [v,p] = [p,-1] for vv in [v+1,...,y]: # next vertex if {v,vv} in E and {v,vv} != {u,y}: if vv == y: return False # found cycle uvy p = vv # find longest edge return True # acyclic Figure 5: Testing ACYCU in logarithmic space latent information that is fixed for each digraph. A deeper explanation for our observation comes from the fact that the treewidth of noncrossing and other outerplanar graphs is bounded to 2. When the treewidth is bounded, all MSO definable properties, including the intractable ones, become linear time decidable for individual structures (Courcelle, 1990). They can also be decided in a logarithmic amount of writable space (Elberfeld et al., 2010), e.g. with element indices instead of sets. By combining this insight with Proposition 1, we obtain a logspace solution for testing acyclicity of a noncrossing graph (Figure 5). Although bounded treewidth is a weaker constraint than so-called bounded treedepth that would immediately guarantee first-order definabil1752 ity (Elberfeld et al., 2016), it can sometimes turn intractable search problems to dynamic programming algorithms (Akutsu and Tamura, 2012). In our case, Proposition 1 gave rise to unambiguous context-free subsets of LNC-DIGRAPH. These can be recognized with dynamic programming and used in efficient constrained inference when we add vertex indices to the brackets and weights to the grammar of the corresponding Dyck language. 
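As a concrete companion to the logspace test in Figure 5, the procedure can be transcribed into ordinary Python as below (a sketch; the function and variable names are ours). For every candidate covering edge {u,y} it checks whether a left-to-right chain of other edges connects u to y, which by Proposition 1 is the only way an underlying cycle can arise in a noncrossing graph.

def noncrossing_acycu(n, E):
    # E is a set of frozenset edges of a noncrossing graph on vertices 1..n.
    for edge in E:
        if len(edge) != 2:
            continue                          # ignore self-loops here
        u, y = min(edge), max(edge)           # candidate covering edge {u,y}
        v, p = u, u
        while p != -1:                        # chain continues
            v, p = p, -1
            for vv in range(v + 1, y + 1):    # next vertex reachable from v
                if frozenset({v, vv}) in E and frozenset({v, vv}) != edge:
                    if vv == y:
                        return False          # covering edge plus chain closes a cycle
                    p = vv                    # keep the longest edge from v
    return True                               # acyclic underlying graph

triangle = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3)]}
path = {frozenset(e) for e in [(1, 2), (2, 3)]}
assert noncrossing_acycu(3, triangle) is False
assert noncrossing_acycu(3, path) is True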
Contribution 3: Digraph Ontology The context-free properties of encoded digraphs have elegant nonderivative language representations and they generate a semi-lattice under language intersection. Although context-free languages are not generally closed under intersection, all combinations of the properties in this lattice are context-free and define natural families of digraphs. The nonderivative representations for our axiomatic properties share the same Dyck language D55 and homomorphism, but differ in terms of forbidden patterns. As a consequence, also any conjunctive combination of these two properties shares these components and thus define a context-free language. The obtained semilattice is an ontology of families of noncrossing digraphs. Our ontology contains important families of noncrossing digraphs used in syntactic and semantic dependency parsing: out trees, dags, and weakly connected digraphs. It shows the entailment between the properties and proves the existence of less known families of noncrossing digraphs such as strongly unambiguous digraphs and oriented graphs, multitrees, oriented forests and polytrees. These are generalizations of out oriented trees. However, these families can still be weakly projective. Table 2 shows integer sequences obtained by enumerating digraphs in each family. At least twelve of these sequences are previously known, which indicates that the families are natural. We used a finite-state toolkit to build the components of the nongenerative language representation for latent encoded digraphs and the axioms.2 Contribution 4: Generic Parsing The fourth contribution of this paper is to show that parsing algorithms can be separated from the formal properties of their search space. 2The finite-state toolkit scripts and a Python-based graph enumerator are available at https://github.com/amikael/ncdigraphs . All the presented families of digraphs can be treated by parsers and other algorithms (e.g. enumeration algorithms) in a uniform manner. The parser’s inference rules can stay constant and the choice of the search space is made by altering the regular component of the language representation. The ontology of the search space can be combined with a constraint relaxation strategy, for example, when an out tree is a preferred analysis, but a dag is also possible as an analysis when no tree is strong enough. The flexibility applies also to dynamic programming algorithms that complement with the encoding and allow inference of best dependency graphs in a family simply by intersection with a weighted CFG grammar for a Dyck language that models position-indexed edges of the complete digraph. Since the families of digraphs are distinguished by forbidden local patterns, the choice of search space can be made purely on lexical grounds, blending well with lexicalized parsing and allowing possibilities such as choosing, per each word, what kind of structures the word can go with. Future work We are planning to extend the coverage of the approach by exploring 1-endpointcrossing and MHk trees (Pitler et al., 2013; G´omez-Rodr´ıguez, 2016), and related digraphs — see (Yli-Jyr¨a, 2004; G´omez-Rodr´ıguez et al., 2011). Properties such as weakly projective, out, and strongly unambiguous prompt further study. An interesting avenue for future work is to explore higher order factorizations for noncrossing digraphs and the related inference. 
We would also like to have more insight on the transformation of MSO definable properties to the current framework and to logspace algorithms. Acknowledgements AYJ has received funding as Research Fellow from the Academy of Finland (dec. No 270354 - A Usable Finite-State Model for Adequate Syntactic Complexity) and Clare Hall Fellow from the University of Helsinki (dec. RP 137/2013). CGR has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 714150 - FASTPARSE) and from the TELEPARES-UDC project (FFI201451978-C2-2-R) from MINECO. The comments of Juha Kontinen, Mark-Jan Nederhof and the anonymous reviewers helped to improve the paper. 1753 References Tatsuya Akutsu and Takeyuki Tamura. 2012. A polynomial-time algorithm for computing the maximum common subgraph of outerplanar graphs of bounded degree. In Branislav Rovan, Vladimiro Sassone, and Peter Widmayer, editors, Mathematical Foundations of Computer Science 2012: 37th International Symposium, MFCS 2012, Bratislava, Slovakia, August 27-31, 2012. Proceedings, Springer Berlin Heidelberg, Berlin, Heidelberg, pages 76–87. https://doi.org/10.1007/978-3-64232589-2 10. Jason Baldridge and Geert-Jan M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Proceedings of EACL’03: the Tenth Conference on European Chapter of the Association for Computational Linguistics Volume 1. Association for Computational Linguistics, Budapest, Hungary, pages 211–218. https://doi.org/10.3115/1067807.1067836. Yehoshua Bar-Hillel, Micha Perles, and Eliahu Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift f¨ur Phonologie, Sprachwissenschaft und Kommunikationsforschung 14:113–124. Noam Chomsky and Marcel-Paul Sch¨utzenberger. 1963. The algebraic theory of context-free languages. Computer Programming and Formal Systems pages 118–161. Bruno Courcelle. 1990. The monadic second-order logic of graphs. I. recognizable sets of finite graphs. Information and Computation 85(1):12 – 75. https://doi.org/10.1016/0890-5401(90)90043H. Bruno Courcelle. 1997. The expression of graph properties and graph transformations in monadic secondorder logic. In G. Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformations, World Scientific, New-Jersey, London, volume 1, chapter 5, pages 313–400. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96). Copenhagen, Denmark, pages 340–345. http://aclweb.org/anthology/C/C96/C961058.pdf. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and Head Automaton Grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, College Park, Maryland, USA, pages 457–464. https://doi.org/10.3115/1034678.1034748. Michael Elberfeld, Martin Grohe, and Till Tantau. 2016. Where first-order and monadic secondorder logic coincide. ACM Trans. Comput. Logic 17(4):25:1–25:18. https://doi.org/10.1145/2946799. Michael Elberfeld, Andreas Jakoby, and Till Tantau. 2010. Logspace versions of the theorems of Bodlaender and Courcelle. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science. IEEE Computer Society, Washington, DC, USA, FOCS ’10, pages 143–152. https://doi.org/10.1109/FOCS.2010.21. Carlos G´omez-Rodr´ıguez. 2016. 
Restricted non-projectivity: Coverage vs. efficiency. Computational Linguistics 42(4):809–817. https://doi.org/10.1162/COLI a 00267. Carlos G´omez-Rodr´ıguez, John A. Carroll, and David J. Weir. 2011. Dependency parsing schemata and mildly non-projective dependency parsing. Computational Linguistics 37(3):541–586. https://doi.org/10.1162/COLI a 00060. Erich Gr¨adel, P. G. Kolaitis, L. Libkin, M. Marx, J. Spencer, Moshe Y. Vardi, Y. Venema, and Scott Weinstein. 2005. Finite Model Theory and Its Applications (Texts in Theoretical Computer Science. An EATCS Series). Springer-Verlag New York, Inc., Secaucus, NJ, USA. Sheila Greibach. 1973. The hardest context-free language. SIAM Journal on Computing 2(4):304–310. https://doi.org/10.1137/0202025. Venkatesan Guruswami, Johan H˚astad, Rajsekar Manokaran, Prasad Raghavendra, and Moses Charikar. 2011. Beating the random ordering is hard: Every ordering CSP is approximation resistant. SIAM Journal on Computing 40(3):878914. https://doi.org/10.1137/090756144. Mans Hulden. 2009. Foma: a finite-state compiler and library. In Proceedings of the Demonstrations Session at EACL 2009. Association for Computational Linguistics, Athens, Greece, pages 29–32. http://www.aclweb.org/anthology/E09-2008. Mans Hulden. 2011. Parsing CFGs and PCFGs with a Chomsky-Sch¨utzenberger representation. In Zygmunt Vetulani, editor, Human Language Technology. Challenges for Computer Science and Linguistics: 4th Language and Technology Conference, LTC 2009, Poznan, Poland, November 68, 2009, Revised Selected Papers, Springer Berlin Heidelberg, Berlin, Heidelberg, pages 151–160. https://doi.org/10.1007/978-3-642-20095-3 14. Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics 20(3):331–378. http://dl.acm.org/citation.cfm?id=204915.204917. Marco Kuhlmann. 2015. Tabulation of noncrossing acyclic digraphs. arXiv:1504.04993. https://arxiv.org/abs/1504.04993. Marco Kuhlmann and Peter Johnsson. 2015. Parsing to noncrossing dependency graphs. Transactions of the Association for Computational Linguistics 3:559– 570. http://aclweb.org/anthology/Q/Q15/Q151040.pdf. 1754 Marco Kuhlmann and Stephan Oepen. 2016. Towards a catalogue of linguistic graph banks. Computational Linguistics 42(4):819–827. https://doi.org/10.1162/COLI a 00268. Bernard Lang. 1994. Recognition can be harder than parsing. Computational Intelligence 10(4):486–494. http://onlinelibrary.wiley.com/doi/10.1111/j.14678640.1994.tb00011.x/full. Klaus-J¨orn Lange. 1997. An unambiguous class possessing a complete set. In Morvan Reischuk, editor, STACKS’97 Proceedings. Springer, volume 1200 of Lecture Notes in Computer Science. http://dl.acm.org/citation.cfm?id=695352. S. Marcus. 1967. Algebraic Linguistics; Analytical Models, volume 29 of Mathematics in Science and Engineering. Academic Press, New York and London. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Vancouver, British Columbia, Canada, pages 523– 530. http://www.aclweb.org/anthology/H/H05/H051066.pdf. Mark-Jan Nederhof and Giorgio Satta. 2003. Probabilistic parsing as intersection. In 8th International Workshop on Parsing Technologies. LORIA, Nancy, France, pages 137–148. OEIS Foundation Inc. 2017. The on-line encyclopedia of integer sequences. 
http://oeis.org, read on 15 January 2017. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 915–926. http://www.aclweb.org/anthology/S15-2153. Kemal Oflazer. 2003. Dependency parsing with an extended finite-state approach. Computational Linguistics 29(4):515–544. https://doi.org/10.1162/089120103322753338. Emily Pitler, Sampath Kannan, and Mitchell Marcus. 2013. Finding optimal 1-endpointcrossing trees. Transactions of the Association for Computational Linguistics 1:13–24. http://aclweb.org/anthology/Q13-1002. George Rebane and Judea Pearl. 1987. The recovery of causal poly-trees from statistical data. In Proceedings of the 3rd Annual Conference on Uncertainty in Artificial Intelligence (UAI 1987). Seattle, WA, pages 222–228. http://dl.acm.org/citation.cfm?id=3023784. Emmanuel Roche. 1996. Transducer parsing of free and frozen sentences. Natural Language Engineering 2(4):345–350. https://doi.org/10.1017/S1351324997001605. Natalie Schluter. 2014. On maximum spanning DAG algorithms for semantic DAG parsing. In Proceedings of the ACL 2014 Workshop on Semantic Parsing. Association for Computational Linguistics, Baltimore, MD, pages 61–65. http://www.aclweb.org/anthology/W/W14/W142412.pdf. Natalie Schluter. 2015. The complexity of finding the maximum spanning DAG and other restrictions for DAG parsing of natural language. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, Denver, Colorado, pages 259– 268. http://www.aclweb.org/anthology/S15-1031. Anssi Yli-Jyr¨a. 2004. Axiomatization of restricted non-projective dependency trees through finite-state constraints that analyse crossing bracketings. In Geert-Jan M. Kruijff and Denys Duchier, editors, COLING 2004 Recent Advances in Dependency Grammar. COLING, Geneva, Switzerland, pages 25–32. https://www.aclweb.org/anthology/W/W04/W041504.pdf. Anssi Yli-Jyr¨a. 2005. Approximating dependency grammars through intersection of star-free regular languages. Int. J. Found. Comput. Sci. 16(3):565– 579. https://doi.org/10.1142/S0129054105003169. Anssi Yli-Jyr¨a. 2012. On dependency analysis via contractions and weighted FSTs. In Diana Santos, Krister Lind´en, and Wanjiku Ng’ang’a, editors, Shall We Play the Festschrift Game?, Essays on the Occasion of Lauri Carlson’s 60th Birthday. Springer, pages 133–158. https://doi.org/10.1007/978-3-64230773-7 10. Anssi Yli-Jyr¨a, Jussi Piitulainen, and Atro Voutilainen. 2012. Refining the design of a contracting finite-state dependency parser. In I˜naki Alegria and Mans Hulden, editors, Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing. Association for Computational Linguistics, Donostia–San Sebasti´an, Spain, pages 108–115. http://www.aclweb.org/anthology/W12-6218. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 188– 193. http://www.aclweb.org/anthology/P11-2033. 1755
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1756–1765 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1161 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1756–1765 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1161 Semi-supervised sequence tagging with bidirectional language models Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power Allen Institute for Artificial Intelligence {matthewp,waleeda,chandrab,russellp}@allenai.org Abstract Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pretrained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers. 1 Introduction Due to their simplicity and efficacy, pre-trained word embedding have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information (Mikolov et al., 2013; Pennington et al., 2014) and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks (Collobert et al., 2011). However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word ‘Central’ is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions (Yang et al., 2017; Ma and Hovy, 2016; Lample et al., 2016; Hashimoto et al., 2016). Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks (e.g., Søgaard and Goldberg, 2016; Yang et al., 2017). In this paper, we explore an alternate semisupervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context. Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. 
When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% F1 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% F1) for the CoNLL 2000 Chunking task. As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.

2 Language model augmented sequence taggers (TagLM)

2.1 Overview

The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. 1. After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).

2.2 Baseline sequence tagging model

Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies (Ma and Hovy, 2016; Lample et al., 2016; Yang et al., 2017; Chiu and Nichols, 2016) (left side of Figure 2). Given a sentence of tokens $(t_1, t_2, \ldots, t_N)$ it first forms a representation, $\mathbf{x}_k$, for each token by concatenating a character based representation $\mathbf{c}_k$ with a token embedding $\mathbf{w}_k$:
$$\mathbf{c}_k = C(t_k; \theta_c), \quad \mathbf{w}_k = E(t_k; \theta_w), \quad \mathbf{x}_k = [\mathbf{c}_k; \mathbf{w}_k] \quad (1)$$
The character representation $\mathbf{c}_k$ captures morphological information and is either a convolutional neural network (CNN) (Ma and Hovy, 2016; Chiu and Nichols, 2016) or RNN (Yang et al., 2017; Lample et al., 2016). It is parameterized by $C(\cdot, \theta_c)$ with parameters $\theta_c$. The token embeddings, $\mathbf{w}_k$, are obtained as a lookup $E(\cdot, \theta_w)$, initialized using pre-trained word embeddings, and fine tuned during training (Collobert et al., 2011). To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, k, the hidden state $\mathbf{h}_{k,i}$ of RNN layer i is formed by concatenating the hidden states from the forward ($\overrightarrow{\mathbf{h}}_{k,i}$) and backward ($\overleftarrow{\mathbf{h}}_{k,i}$) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token k. More formally, for the first RNN layer that operates on $\mathbf{x}_k$ to output $\mathbf{h}_{k,1}$:
$$\overrightarrow{\mathbf{h}}_{k,1} = \overrightarrow{R}_1(\mathbf{x}_k, \overrightarrow{\mathbf{h}}_{k-1,1}; \theta_{\overrightarrow{R}_1}), \quad \overleftarrow{\mathbf{h}}_{k,1} = \overleftarrow{R}_1(\mathbf{x}_k, \overleftarrow{\mathbf{h}}_{k+1,1}; \theta_{\overleftarrow{R}_1}), \quad \mathbf{h}_{k,1} = [\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}] \quad (2)$$
Figure 1: The main components in TagLM, our language-model-augmented sequence tagging system. The language model component (in orange) is used to augment the input token representation in a traditional sequence tagging model (in grey). (Panel labels: Step 1: Pretrain word embeddings and language model. Step 2: Prepare word embedding and LM embedding for each token in the input sequence. Step 3: Use both word embeddings and LM embeddings in the sequence tagging model.)

The second RNN layer is similar and uses $\mathbf{h}_{k,1}$ to output $\mathbf{h}_{k,2}$. In this paper, we use $L = 2$ layers of RNNs in all experiments and parameterize $R_i$ as either Gated Recurrent Units (GRU) (Cho et al., 2014) or Long Short-Term Memory units (LSTM) (Hochreiter and Schmidhuber, 1997) depending on the task. Finally, the output of the final RNN layer $\mathbf{h}_{k,L}$ is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g., using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token.
Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss (Lafferty et al., 2001) using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to Collobert et al. (2011).

Figure 2: Overview of TagLM, our language model augmented sequence tagging architecture. The top level embeddings from a pre-trained bidirectional LM are inserted in a stacked bidirectional RNN sequence tagging model. See text for details.

2.3 Bidirectional LM

A language model computes the probability of a token sequence $(t_1, t_2, \ldots, t_N)$:
$$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \ldots, t_{k-1}).$$
Recent state of the art neural language models (Józefowicz et al., 2016) use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history $(t_1, t_2, \ldots, t_k)$ into a fixed dimensional vector $\overrightarrow{\mathbf{h}}^{LM}_k$. This is the forward LM embedding of the token at position k and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token $t_{k+1}$ using a softmax layer over words in the vocabulary.

The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with N tokens, it computes
$$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N).$$
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding $\overleftarrow{\mathbf{h}}^{LM}_k$, for the sequence $(t_k, t_{k+1}, \ldots, t_N)$, the output embeddings of the top layer LSTM.

In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., $\mathbf{h}^{LM}_k = [\overrightarrow{\mathbf{h}}^{LM}_k; \overleftarrow{\mathbf{h}}^{LM}_k]$. Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.

2.4 Combining LM with sequence model

Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings $\mathbf{h}^{LM}$ with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace (2) with
$$\mathbf{h}_{k,1} = [\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}; \mathbf{h}^{LM}_k]. \quad (3)$$
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g., replacing (3) with $f([\overrightarrow{\mathbf{h}}_{k,1}; \overleftarrow{\mathbf{h}}_{k,1}; \mathbf{h}^{LM}_k])$ where f is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model.
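A minimal sketch of the simple concatenation in Eq. (3), assuming PyTorch, is given below. The class and argument names are ours; the character encoder, the CRF output layer and the frozen pre-trained LMs are treated as given, and the dimensions in the usage lines are arbitrary placeholders.

import torch
import torch.nn as nn

class TagLMEncoder(nn.Module):
    # Two-layer biRNN tagger that injects pre-trained LM embeddings between the layers (Eq. 3).
    def __init__(self, token_dim, lm_dim, hidden, num_tags):
        super().__init__()
        self.rnn1 = nn.GRU(token_dim, hidden, bidirectional=True, batch_first=True)
        self.rnn2 = nn.GRU(2 * hidden + lm_dim, hidden, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)

    def forward(self, x, h_lm):
        # x:    (batch, seq, token_dim) token representations [c_k; w_k]
        # h_lm: (batch, seq, lm_dim)    frozen bidirectional LM embeddings, precomputed
        h1, _ = self.rnn1(x)
        h1 = torch.cat([h1, h_lm], dim=-1)    # Eq. (3): [forward; backward; h^LM]
        h2, _ = self.rnn2(h1)
        return self.proj(h2)                  # per-token tag scores (CRF layer omitted)

encoder = TagLMEncoder(token_dim=100, lm_dim=1024, hidden=300, num_tags=17)
scores = encoder(torch.randn(2, 8, 100), torch.randn(2, 8, 1024))   # shape (2, 8, 17)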
Our initial results with the simple concatenation were encouraging so we did not explore these alternatives in this study, preferring to leave them for future work. 3 Experiments We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task (Sang and Meulder, 2003) and the CoNLL 2000 Chunking task (Sang and Buchholz, 2000). We report the official evaluation metric (micro-averaged F1). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options (e.g., Ratinov and Roth, 2009). Following Chiu and Nichols (2016), we use the Senna word embeddings (Collobert et al., 2011) and pre-processed the text by lowercasing all tokens and replacing all digits with 0. CoNLL 2003 NER. The CoNLL 2003 NER task consists of newswire from the Reuters RCV1 corpus tagged with four different entity types (PER, LOC, ORG, MISC). It includes standard train, development and test sets. Following previous work (Yang et al., 2017; Chiu and Nichols, 2016) we trained on both the train and development sets after tuning hyperparameters on the development set. The hyperparameters for our baseline model are similar to Yang et al. (2017). We use two bidirectional GRUs with 80 hidden units and 25 dimensional character embeddings for the token character encoder. The sequence layer uses two bidirectional GRUs with 300 hidden units each. For regularization, we add 25% dropout to the input of each GRU, but not to the recurrent connections. CoNLL 2000 chunking. The CoNLL 2000 chunking task uses sections 15-18 from the Wall Street Journal corpus for training and section 20 for testing. It defines 11 syntactic chunk types (e.g., NP, VP, ADJP) in addition to other. We randomly sampled 1000 sentences from the training set as a held-out development set. The baseline sequence tagger uses 30 dimensional character embeddings and a CNN with 30 filters of width 3 characters followed by a tanh non-linearity for the token character encoder. The sequence layer uses two bidirectional LSTMs with 200 hidden units. Following Ma and Hovy (2016) we added 50% dropout to the character embeddings, the input to each LSTM layer (but not recurrent connections) and to the output of the final LSTM layer. Pre-trained language models. The primary bidirectional LMs we used in this study were trained on the 1B Word Benchmark (Chelba et al., 2014), a publicly available benchmark for largescale language modeling. The training split has approximately 800 million tokens, about a 4000X increase over the number training tokens in the CoNLL datasets. J´ozefowicz et al. (2016) explored several model architectures and released their best single model and training recipes. Following Sak et al. (2014), they used linear projection layers at the output of each LSTM layer to reduce the computation time but still maintain a large LSTM state. Their single best model took three weeks to train on 32 GPUs and achieved 30.0 test perplexity. It uses a character CNN with 4096 filters for input, followed by two stacked LSTMs, each with 8192 hidden units and a 1024 dimensional projection layer. We use CNN-BIG-LSTM to refer to this language model in our results. In addition to CNN-BIG-LSTM from J´ozefowicz et al. (2016),1 we used the same corpus to train two additional language models with fewer parameters: forward LSTM-2048-512 and backward LSTM-2048-512. Both language models use token embeddings as input to a single layer LSTM with 2048 units and a 512 dimension projection layer. 
We closely followed the procedure outlined in J´ozefowicz et al. (2016), except we used synchronous parameter updates across four GPUs instead of asynchronous updates across 32 GPUs and ended training after 10 epochs. The test set perplexities for our forward and backward LSTM-2048-512 language models are 47.7 and 47.3, respectively.2 1https://github.com/tensorflow/models/ tree/master/lm_1b 2Due to different implementations, the perplexity of the forward LM with similar configurations in J´ozefowicz et al. (2016) is different (45.0 vs. 47.7). 1759 Model F1± std Chiu and Nichols (2016) 90.91 ± 0.20 Lample et al. (2016) 90.94 Ma and Hovy (2016) 91.37 Our baseline without LM 90.87 ± 0.13 TagLM 91.93 ± 0.19 Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text. Model F1± std Yang et al. (2017) 94.66 Hashimoto et al. (2016) 95.02 Søgaard and Goldberg (2016) 95.28 Our baseline without LM 95.00 ± 0.08 TagLM 96.37 ± 0.05 Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text. Training. All experiments use the Adam optimizer (Kingma and Ba, 2015) with gradient norms clipped at 5.0. In all experiments, we fine tune the pre-trained Senna word embeddings but fix all weights in the pre-trained language models. In addition to explicit dropout regularization, we also use early stopping to prevent over-fitting and use the following process to determine when to stop training. We first train with a constant learning rate α = 0.001 on the training data and monitor the development set performance at each epoch. Then, at the epoch with the highest development performance, we start a simple learning rate annealing schedule: decrease α an order of magnitude (i.e., divide by ten), train for five epochs, decrease α an order of magnitude again, train for five more epochs and stop. Following Chiu and Nichols (2016), we train each final model configuration ten times with different random seeds and report the mean and standard deviation F1. It is important to estimate the variance of model performance since the test data sets are relatively small. 3.1 Overall system results Tables 1 and 2 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables 3 and 4 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512). In the CoNLL 2003 NER task, our model scores 91.93 mean F1, which is a statistically significant increase over the previous best result of 91.62 ±0.33 from Chiu and Nichols (2016) that used gazetteers (at 95%, two-sided Welch t-test, p = 0.021). In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean F1, exceeding all previously published results without additional labeled data by more then 1% absolute F1. The improvement over the previous best result of 95.77 in Hashimoto et al. (2016) that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% (p < 0.001 assuming standard deviation of 0.1). Importantly, the LM embeddings amounts to an average absolute improvement of 1.06 and 1.37 F1 in the NER and Chunking tasks, respectively. Adding external resources. 
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables 3 and 4 show that, in most cases, the improvements we obtain by adding LM embeddings are larger then the improvements previously obtained by adding other forms of transfer or joint learning. For example, Yang et al. (2017) noted an improvement of only 0.06 F1 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags and Chiu and Nichols (2016) reported an increase of 0.71 F1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in F1 when including supervised labels from the PTB POS tags or CoNLL 2003 entities (Yang et al., 2017; Søgaard and Goldberg, 2016; Hashimoto et al., 2016). 3.2 Analysis To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task. How to use LM embeddings? In this experiment, we concatenate the LM embeddings at dif1760 F1 F1 Model External resources Without With ∆ Yang et al. (2017) transfer from CoNLL 2000/PTB-POS 91.2 91.26 +0.06 Chiu and Nichols (2016) with gazetteers 90.91 91.62 +0.71 Collobert et al. (2011) with gazetteers 88.67 89.59 +0.92 Luo et al. (2015) joint with entity linking 89.9 91.2 +1.3 Ours no LM vs TagLM unlabeled data only 90.87 91.93 +1.06 Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources). F1 F1 Model External resources Without With ∆ Yang et al. (2017) transfer from CoNLL 2003/PTB-POS 94.66 95.41 +0.75 Hashimoto et al. (2016) jointly trained with PTB-POS 95.02 95.77 +0.75 Søgaard and Goldberg (2016) jointly trained with PTB-POS 95.28 95.56 +0.28 Ours no LM vs TagLM unlabeled data only 95.00 96.37 +1.37 Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data). Use LM embeddings at F1± std input to the first RNN layer 91.55 ± 0.21 output of the first RNN layer 91.93 ± 0.19 output of the second RNN layer 91.72 ± 0.13 Table 5: Comparison of CoNLL-2003 test set F1 when the LM embeddings are included at different layers in the baseline tagger. ferent locations in the baseline sequence tagger. In particular, we used the LM embeddings hLM k to: • augment the input of the first RNN layer; i.e., xk = [ck; wk; hLM k ], • augment the output of the first RNN layer; i.e., hk,1 = [−→ h k,1; ←− h k,1; hLM k ],3 and • augment the output of the second RNN layer; i.e., hk,2 = [−→ h k,2; ←− h k,2; hLM k ]. Table 5 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These 3This configuration the same as Eq. 3 in §2.4. It was reproduced here for convenience. results are consistent with Søgaard and Goldberg (2016) who found that chunking performance was sensitive to the level at which additional POS supervision was added. Does it matter which language model to use? 
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table 6. We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with F1 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM. LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves F1 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance. To highlight the importance of including language models trained on a large scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data 1761 Forward language model Backward language model LM perplexity F1± std Fwd Bwd — — N/A N/A 90.87 ± 0.13 LSTM-512-256∗ LSTM-512-256∗ 106.9 104.2 90.79 ± 0.15 LSTM-2048-512 — 47.7 N/A 91.40 ± 0.18 LSTM-2048-512 LSTM-2048-512 47.7 47.3 91.62 ± 0.23 CNN-BIG-LSTM — 30.0 N/A 91.66 ± 0.13 CNN-BIG-LSTM LSTM-2048-512 30.0 47.3 91.93 ± 0.19 Table 6: Comparison of CoNLL-2003 test set F1 for different language model combinations. All language models were trained and evaluated on the 1B Word Benchmark, except LSTM-512-256∗which was trained and evaluated on the standard splits of the NER CoNLL 2003 dataset. set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models help because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data. Importance of task specific RNN. To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 F1, well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment. Dataset size. A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from Yang et al. (2017) that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. 
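The exact subsampling procedure of Yang et al. (2017) is not reproduced here, so the following is only an illustrative sketch of how a 1% sample of the CoNLL 2003 training sentences might be drawn with a fixed seed; the loader name and file path are placeholders rather than part of either system:

import random

def sample_fraction(sentences, fraction=0.01, seed=13):
    # `sentences` is a list of sentences, each a list of (token, tag) pairs.
    # A fixed seed keeps the low-resource subsample reproducible across runs.
    rng = random.Random(seed)
    k = max(1, int(len(sentences) * fraction))
    return rng.sample(sentences, k)

# Hypothetical usage; `load_conll` stands in for any CoNLL 2003 reader.
# train_1pct = sample_fraction(load_conll("eng.train"), fraction=0.01)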
In this scenario, test F1 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% F1 for a similar comparison with the full training dataset. The analogous increases in Yang et al. (2017) are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% F1 for transfer from PTB POS tags. However, they found only a 0.06% F1 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets. Number of parameters. Our TagLM formulation increases the number of parameters in the second RNN layer R2 due to the increase in the input dimension h1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% F1) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no LM model. In this case, test F1 increased slightly to 92.00 ± 0.11 indicating that the additional parameters in TagLM are slightly hurting 1762 performance.4 Does the LM transfer across domains? One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE.5 ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased F1 on the development set by 4.12% (from 49.93 to to 54.05%) for entity extraction over our baseline without LM embeddings and it was a major component in our winning submission to ScienceIE, Scenario 1 (Ammar et al., 2017). We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain. 4 Related work Unlabeled data. TagLM was inspired by the widespread use of pre-trained word embeddings in supervised sequence tagging models. Besides pre-trained word embeddings, our method is most closely related to Li and McCallum (2005). Instead of using a LM, Li and McCallum (2005) uses a probabilistic generative model to infer contextsensitive latent variables for each token, which are then used as extra features in a supervised CRF tagger (Lafferty et al., 2001). 
Other semisupervised learning methods for structured prediction problems include co-training (Blum and Mitchell, 1998; Pierce and Cardie, 2001), expectation maximization (Nigam et al., 2000; Mohit and Hwa, 2005), structural learning (Ando and Zhang, 2005) and maximum discriminant functions (Suzuki et al., 2007; Suzuki and Isozaki, 2008). It is easy to combine TagLM with any of the above methods by including LM embeddings as additional features in the discriminative components of the model (except for expectation maximization). A detailed discussion of semisupervised learning methods in NLP can be found 4A similar experiment for the Chunking task did not improve F1 so this conclusion is task dependent. 5https://scienceie.github.io/ in (Søgaard, 2013). Melamud et al. (2016) learned a context encoder from unlabeled data with an objective function similar to a bi-directional LM and applied it to several NLP tasks closely related to the unlabeled objective function: sentence completion, lexical substitution and word sense disambiguation. LM embeddings are related to a class of methods (e.g., Le and Mikolov, 2014; Kiros et al., 2015; Hill et al., 2016) for learning sentence and document encoders from unlabeled data, which can be used for text classification and textual entailment among other tasks. Dai and Le (2015) pre-trained LSTMs using language models and sequence autoencoders then fine tuned the weights for classification tasks. In contrast to our method that uses unlabeled data to learn token-in-context embeddings, all of these methods use unlabeled data to learn an encoder for an entire text sequence (sentence or document). Neural language models. LMs have always been a critical component in statistical machine translation systems (Koehn, 2009). Recently, neural LMs (Bengio et al., 2003; Mikolov et al., 2010) have also been integrated in neural machine translation systems (e.g., Kalchbrenner and Blunsom, 2013; Devlin et al., 2014) to score candidate translations. In contrast, TagLM uses neural LMs to encode words in the input sequence. Unlike forward LMs, bidirectional LMs have received little prior attention. Most similar to our formulation, Peris and Casacuberta (2015) used a bidirectional neural LM in a statistical machine translation system for instance selection. They tied the input token embeddings and softmax weights in the forward and backward directions, unlike our approach which uses two distinct models without any shared parameters. Frinken et al. (2012) also used a bidirectional n-gram LM for handwriting recognition. Interpreting RNN states. Recently, there has been some interest in interpreting the activations of RNNs. Linzen et al. (2016) showed that single LSTM units can learn to predict singular-plural distinctions. Karpathy et al. (2015) visualized character level LSTM states and showed that individual cells capture long-range dependencies such as line lengths, quotes and brackets. Our work complements these studies by showing that LM states are useful for downstream tasks as a way 1763 of interpreting what they learn. Other sequence tagging models. Current state of the art results in sequence tagging problems are based on bidirectional RNN models. However, many other sequence tagging models have been proposed in the literature for this class of problems (e.g., Lafferty et al., 2001; Collins, 2002). 
LM embeddings could also be used as additional features in other models, although it is not clear whether the model complexity would be sufficient to effectively make use of them. 5 Conclusion In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples. Acknowledgments We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. References Waleed Ammar, Matthew E. Peters, Chandra Bhagavatula, and Russell Power. 2017. The AI2 system at SemEval-2017 Task 10 (ScienceIE): semisupervised end-to-end entity and relation extraction. In ACL workshop (SemEval). Rie Kubota Ando and Tong Zhang. 2005. A highperformance semi-supervised learning method for text chunking. In ACL. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. In JMLR. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2014. One billion word benchmark for measuring progress in statistical language modeling. CoRR abs/1312.3005. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. In TACL. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In SSST@EMNLP. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. In JMLR. Andrew M. Dai and Quoc V. Le. 2015. Semisupervised sequence learning. In NIPS. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In ACL. Volkmar Frinken, Alicia Forn´es, Josep Llad´os, and Jean-Marc Ogier. 2012. Bidirectional language model for handwriting recognition. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. CoRR abs/1611.01587. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9. Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR abs/1602.02410. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP. 
Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. In ICLR workshop. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Jamie Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. 1764 John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL-HLT. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML. Wei Li and Andrew McCallum. 2005. Semi-supervised sequence modeling with syntactic topic models. In AAAI. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. In TACL. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In EMNLP. Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In ACL. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In CoNLL. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Behrang Mohit and Rebecca Hwa. 2005. Syntax-based semi-supervised named entity tagging. In ACL. Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using em. Machine learning . Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. ´Alvaro Peris and Francisco Casacuberta. 2015. A bidirectional recurrent neural language model for machine translation. Procesamiento del Lenguaje Natural . David Pierce and Claire Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In EMNLP. Lev-Arie Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL. Hasim Sak, Andrew W. Senior, and Franoise Beaufays. 2014. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunking. In CoNLL/LLL. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In CoNLL. Anders Søgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technologies . Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL. Jun Suzuki, Akinori Fujino, and Hideki Isozaki. 2007. Semi-supervised structured output learning based on a hybrid generative and discriminative approach. In EMNLP-CoNLL. Jun Suzuki and Hideki Isozaki. 2008. 
Semi-supervised sequential labeling and segmentation using gigaword scale unlabeled data. In ACL. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. 1765
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1766–1776 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1162 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1766–1776 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1162 Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings He He and Anusha Balakrishnan and Mihail Eric and Percy Liang Computer Science Department, Stanford University {hehe,anusha28,meric,pliang}@cs.stanford.edu Abstract We study a symmetric collaborative dialogue setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models. 1 Introduction Current task-oriented dialogue systems (Young et al., 2013; Wen et al., 2017; Dhingra et al., 2017) require a pre-defined dialogue state (e.g., slots such as food type and price range for a restaurant searching task) and a fixed set of dialogue acts (e.g., request, inform). However, human conversation often requires richer dialogue states and more nuanced, pragmatic dialogue acts. Recent opendomain chat systems (Shang et al., 2015; Serban et al., 2015b; Sordoni et al., 2015; Li et al., 2016a; Lowe et al., 2017; Mei et al., 2017) learn a mapping directly from previous utterances to the next utterance. While these models capture open-ended aspects of dialogue, the lack of structured dialogue state prevents them from being directly applied to settings that require interfacing with structured knowledge. In order to bridge the gap between the two types Friends of agent A: Name School Major Company Jessica Columbia Computer Science Google Josh Columbia Linguistics Google ... ... ... ... A: Hi! Most of my friends work for Google B: do you have anyone who went to columbia? A: Hello? A: I have Jessica a friend of mine A: and Josh, both went to columbia B: or anyone working at apple? B: SELECT (Jessica, Columbia, Computer Science, Google) A: SELECT (Jessica, Columbia, Computer Science, Google) Figure 1: An example dialogue from the MutualFriends task in which two agents, A and B, each given a private list of a friends, try to identify their mutual friend. Our objective is to build an agent that can perform the task with a human. Crosstalk (Section 2.3) is italicized. of systems, we focus on a symmetric collaborative dialogue setting, which is task-oriented but encourages open-ended dialogue acts. In our setting, two agents, each with a private list of items with attributes, must communicate to identify the unique shared item. Consider the dialogue in Figure 1, in which two people are trying to find their mutual friend. By asking “do you have anyone who went to columbia?”, B is suggesting that she has some Columbia friends, and that they probably work at Google. 
Such conversational implicature is lost when interpreting the utterance as simply an information request. In addition, it is hard to define a structured state that captures the diverse semantics in many utterances (e.g., defining “most of”, “might be”; see details in Table 1). To model both structured and open-ended context, we propose the Dynamic Knowledge Graph Network (DynoNet), in which the dialogue state is modeled as a knowledge graph with an embedding 1766 for each node (Section 3). Our model is similar to EntNet (Henaff et al., 2017) in that node/entity embeddings are updated recurrently given new utterances. The difference is that we structure entities as a knowledge graph; as the dialogue proceeds, new nodes are added and new context is propagated on the graph. An attention-based mechanism (Bahdanau et al., 2015) over the node embeddings drives generation of new utterances. Our model’s use of knowledge graphs captures the grounding capability of classic task-oriented systems and the graph embedding provides the representational flexibility of neural models. The naturalness of communication in the symmetric collaborative setting enables large-scale data collection: We were able to crowdsource around 11K human-human dialogues on Amazon Mechanical Turk (AMT) in less than 15 hours.1 We show that the new dataset calls for more flexible representations beyond fully-structured states (Section 2.2). In addition to conducting the third-party human evaluation adopted by most work (Liu et al., 2016; Li et al., 2016b,c), we also conduct partner evaluation (Wen et al., 2017) where AMT workers rate their conversational partners (other workers or our models) based on fluency, correctness, cooperation, and human-likeness. We compare DynoNet with baseline neural models and a strong rulebased system. The results show that DynoNet can perform the task with humans efficiently and naturally; it also captures some strategic aspects of human-human dialogues. The contributions of this work are: (i) a new symmetric collaborative dialogue setting and a large dialogue corpus that pushes the boundaries of existing dialogue systems; (ii) DynoNet, which integrates semantically rich utterances with structured knowledge to represent open-ended dialogue states; (iii) multiple automatic metrics based on bot-bot chat and a comparison of third-party and partner evaluation. 2 Symmetric Collaborative Dialogue We begin by introducing a collaborative task between two agents and describe the human-human dialogue collection process. We show that our data exhibits diverse, interesting language phenomena. 1The dataset is available publicly at https:// stanfordnlp.github.io/cocoa/. 2.1 Task Definition In the symmetric collaborative dialogue setting, there are two agents, A and B, each with a private knowledge base—KBA and KBB, respectively. Each knowledge base includes a list of items, where each item has a value for each attribute. For example, in the MutualFriends setting, Figure 1, items are friends and attributes are name, school, etc. There is a shared item that A and B both have; their goal is to converse with each other to determine the shared item and select it. Formally, an agent is a mapping from its private KB and the dialogue thus far (sequence of utterances) to the next utterance to generate or a selection. A dialogue is considered successful when both agents correctly select the shared item. This setting has parallels in human-computer collaboration where each agent has complementary expertise. 
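To make the setting concrete, the sketch below represents each private KB as a list of attribute-value mappings and checks for the shared item; the attribute names follow Figure 1, while the second agent's extra entry is invented purely for illustration:

# Each agent's private KB: a list of items, each mapping attributes to values.
kb_a = [
    {"Name": "Jessica", "School": "Columbia", "Major": "Computer Science", "Company": "Google"},
    {"Name": "Josh", "School": "Columbia", "Major": "Linguistics", "Company": "Google"},
]
kb_b = [
    {"Name": "Jessica", "School": "Columbia", "Major": "Computer Science", "Company": "Google"},
    {"Name": "Emily", "School": "NYU", "Major": "History", "Company": "Apple"},
]

def shared_items(kb_a, kb_b):
    # The dialogue succeeds when both agents select the unique item present in both KBs.
    frozen_b = {tuple(sorted(item.items())) for item in kb_b}
    return [item for item in kb_a if tuple(sorted(item.items())) in frozen_b]

assert len(shared_items(kb_a, kb_b)) == 1  # Jessica is the mutual friend here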
2.2 Data collection We created a schema with 7 attributes and approximately 3K entities (attribute values). To elicit linguistic and strategic variants, we generate a random scenario for each task by varying the number of items (5 to 12), the number attributes (3 or 4), and the distribution of values for each attribute (skewed to uniform). See Appendix A and B for details of schema and scenario generation. Figure 2: Screenshot of the chat interface. We crowdsourced dialogues on AMT by randomly pairing up workers to perform the task within 5 minutes.2 Our chat interface is shown in Figure 2. To discourage random guessing, we prevent workers from selecting more than once every 10 seconds. Our task was very popular and we col2If the workers exceed the time limit, the dialogue is marked as unsuccessful (but still logged). 1767 Type % Easy example Hard example Inform 30.4 I know a judy. / I have someone who studied the bible in the afternoon. About equal indoor and outdoor friends / me too. his major is forestry / might be kelly Ask 17.7 Do any of them like Poi? / What does your henry do? What can you tell me about our friend? / Or maybe north park college? Answer 7.4 None of mine did / Yup / They do. / Same here. yes 3 of them / No he likes poi / yes if boston college Table 1: Main utterance types and examples. We show both standard utterances whose meaning can be represented by simple logical forms (e.g., ask(indoor)), and open-ended ones which require more complex logical forms (difficult parts in bold). Text spans corresponding to entities are underlined. Phenomenon Example Coreference (I know one Debra) does she like the indoors? / (I have two friends named TIffany) at World airways? Coordination keep on going with the fashion / Ok. let’s try something else. / go by hobby / great. select him. thanks! Chit-chat Yes, that is good ole Terry. / All indoorsers! my friends hate nature Categorization same, most of mine are female too / Does any of them names start with B Correction I know one friend into Embroidery - her name is Emily. Sorry – Embroidery friend is named Michelle Table 2: Communication phenomena in the dataset. Evident parts is in bold and text spans corresponding to an entity are underlined. For coreference, the antecedent is in parentheses. lected 11K dialogues over a period of 13.5 hours.3 Of these, over 9K dialogues are successful. Unsuccessful dialogues are usually the result of either worker leaving the chat prematurely. 2.3 Dataset statistics We show the basic statistics of our dataset in Table 3. An utterance is defined as a message sent by one of the agents. The average utterance length is short due to the informality of the chat, however, an agent usually sends multiple utterances in one turn. Some example dialogues are shown in Table 6 and Appendix I. # dialogues 11157 # completed dialogues 9041 Vocabulary size 5325 Average # of utterances 11.41 Average time taken per task (sec.) 91.18 Average utterance length (tokens) 5.08 Number of linguistic templates4 41561 Table 3: Statistics of the MutualFriends dataset. We categorize utterances into coarse types— inform, ask, answer, greeting, apology—by pattern matching (Appendix E). There are 7.4% multitype utterances, and 30.9% utterances contain more than one entity. In Table 1, we show example utterances with rich semantics that cannot be sufficiently represented by traditional slot-values. 3Tasks are put up in batches; the total time excludes intervals between batches. 4Entity names are replaced by their entity types. 
Some of the standard ones are also non-trivial due to coreference and logical compositionality. Our dataset also exhibits some interesting communication phenomena. Coreference occurs frequently when people check multiple attributes of one item. Sometimes mentions are dropped, as an utterance simply continues from the partner’s utterance. People occasionally use external knowledge to group items with out-of-schema attributes (e.g., gender based on names, location based on schools). We summarize these phenomena in Table 2. In addition, we find 30% utterances involve cross-talk where the conversation does not progress linearly (e.g., italic utterances in Figure 1), a common characteristic of online chat (Ivanovic, 2005). One strategic aspect of this task is choosing the order of attributes to mention. We find that people tend to start from attributes with fewer unique values, e.g., “all my friends like morning” given the KBB in Table 6, as intuitively it would help exclude items quickly given fewer values to check.5 We provide a more detailed analysis of strategy in Section 4.2 and Appendix F. 3 Dynamic Knowledge Graph Network The diverse semantics in our data motivates us to combine unstructured representation of the dialogue history with structured knowledge. Our 5Our goal is to model human behavior thus we do not discuss the optimal strategy here. 1768 B: anyone went to columbia? columbia google KB + Dialogue history Dynamic knowledge graph Graph embedding Generator Name School Company Jessica Columbia Google Josh Columbia Google Item 1 Item 2 2 1 josh jessica S N C Message passing path of columbia anyone went columbia … … columbia google jessica josh … … Yes and josh jessica Attention + Copy Figure 3: Overview of our approach. First, the KB and dialogue history (entities in bold) is mapped to a graph. Here, an item node is labeled by the item ID and an attribute node is labeled by the attribute’s first letter. Next, each node is embedded using relevant utterance embeddings through message passing. Finally, an LSTM generates the next utterance based on attention over the node embeddings. model consists of three components shown in Figure 3: (i) a dynamic knowledge graph, which represents the agent’s private KB and shared dialogue history as a graph (Section 3.1), (ii) a graph embedding over the nodes (Section 3.2), and (iii) an utterance generator (Section 3.3). The knowledge graph represents entities and relations in the agent’s private KB, e.g., item-1’s company is google. As the conversation unfolds, utterances are embedded and incorporated into node embeddings of mentioned entities. For instance, in Figure 3, “anyone went to columbia” updates the embedding of columbia. Next, each node recursively passes its embedding to neighboring nodes so that related entities (e.g., those in the same row or column) also receive information from the most recent utterance. In our example, jessica and josh both receive new context when columbia is mentioned. Finally, the utterance generator, an LSTM, produces the next utterance by attending to the node embeddings. 3.1 Knowledge Graph Given a dialogue of T utterances, we construct graphs (Gt)T t=1 over the KB and dialogue history for agent A.6 There are three types of nodes: item nodes, attribute nodes, and entity nodes. Edges between nodes represent their relations. 
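One possible in-memory representation of such a graph is sketched below, using the relation triple discussed in the example that follows; the class layout and the attribute-to-entity edge are illustrative assumptions, not a description of the released implementation:

from collections import defaultdict

class KnowledgeGraph:
    # Nodes are items, attributes, or entities; edges carry relation labels.
    def __init__(self):
        self.node_type = {}                # node -> "item" | "attribute" | "entity"
        self.neighbors = defaultdict(set)  # node -> {(neighbor, relation), ...}

    def add_node(self, node, node_type):
        self.node_type.setdefault(node, node_type)

    def add_edge(self, head, relation, tail):
        # Edges are stored in both directions so context can flow either way.
        self.neighbors[head].add((tail, relation))
        self.neighbors[tail].add((head, relation))

g = KnowledgeGraph()
g.add_node("item-1", "item")
g.add_node("school", "attribute")
g.add_node("columbia", "entity")
g.add_edge("item-1", "hasSchool", "columbia")
g.add_edge("school", "hasValue", "columbia")  # assumed linking of an attribute to its values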
For example, (item-1, hasSchool, columbia) means that the first item has attribute school whose value 6 It is important to differentiate perspectives of the two agents as they have different KBs. Thereafter we assume the perspective of agent A, i.e., accessing KBA for A only, and refer to B as the partner. is columbia. An example graph is shown in Figure 3. The graph Gt is updated based on utterance t by taking Gt−1 and adding a new node for any entity mentioned in utterance t but not in KBA.7 3.2 Graph Embedding Given a knowledge graph, we are interested in computing a vector representation for each node v that captures both its unstructured context from the dialogue history and its structured context in the KB. A node embedding Vt(v) for each node v 2 Gt is built from three parts: structural properties of an entity defined by the KB, embeddings of utterances in the dialogue history, and message passing between neighboring nodes. Node Features. Simple structural properties of the KB often govern what is talked about; e.g., a high-frequency entity is usually interesting to mention (consider “All my friends like dancing.”). We represent this type of information as a feature vector Ft(v), which includes the degree and type (item, attribute, or entity type) of node v, and whether it has been mentioned in the current turn. Each feature is encoded as a one-hot vector and they are concatenated to form Ft(v). Mention Vectors. A mention vector Mt(v) contains unstructured context from utterances relevant to node v up to turn t. To compute it, we first define the utterance representation ˜ut and the set of relevant entities Et. Let ut be the embedding of utterance t (Section 3.3). To differentiate between 7 We use a rule-based lexicon to link text spans to entities. See details in Appendix D. 1769 the agent’s and the partner’s utterances, we represent it as ˜ut = ⇥ ut · {ut2Uself}, ut · {ut2Upartner} ⇤ , where Uself and Upartner denote sets of utterances generated by the agent and the partner, and [·, ·] denotes concatenation. Let Et be the set of entity nodes mentioned in utterance t if utterance t mentions some entities, or utterance t −1 otherwise.8 The mention vector Mt(v) of node v incorporates the current utterance if v is mentioned and inherits Mt−1(v) if not: Mt(v) = λtMt−1(v) + (1 −λt)˜ut; (1) λt = ( σ $ W inc [Mt−1(v), ˜ut] % if v 2 Et, 1 otherwise. Here, σ is the sigmoid function and W inc is a parameter matrix. Recursive Node Embeddings. We propagate information between nodes according to the structure of the knowledge graph. In Figure 3, given “anyone went to columbia?”, the agent should focus on her friends who went to Columbia University. Therefore, we want this utterance to be sent to item nodes connected to columbia, and one step further to other attributes of these items because they might be mentioned next as relevant information, e.g., jessica and josh. We compute the node embeddings recursively, analogous to belief propagation: V k t (v) = max v02Nt(v) tanh (2) ⇣ W mp h V k−1 t (v0), R(ev!v0) i⌘ , where V k t (v) is the depth-k node embedding at turn t and Nt(v) denotes the set of nodes adjacent to v. The message from a neighboring node v0 depends on its embedding at depth-(k −1), the edge label ev!v0 (embedded by a relation embedding function R), and a parameter matrix W mp. Messages from all neighbors are aggregated by max, the element-wise max operation.9 Example message passing paths are shown in Figure 3. 
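A small numpy sketch of one depth of this recursive update follows; the dictionary-based graph layout, the relation-embedding lookup, and the fallback for isolated nodes are assumptions made only for illustration, and the depth-wise concatenation that yields the final node embedding is described next:

import numpy as np

def message_passing_step(V_prev, neighbors, relation_emb, W_mp):
    # V_prev:       node -> embedding at depth k-1
    # neighbors:    node -> list of (neighbor, relation_label) pairs
    # relation_emb: relation_label -> relation embedding vector
    # W_mp:         parameter matrix of shape (d_out, d_node + d_relation)
    V_next = {}
    for v, nbrs in neighbors.items():
        if not nbrs:                       # assumed fallback for nodes with no neighbors
            V_next[v] = V_prev[v]
            continue
        messages = [
            np.tanh(W_mp @ np.concatenate([V_prev[u], relation_emb[rel]]))
            for u, rel in nbrs
        ]
        V_next[v] = np.max(np.stack(messages), axis=0)  # element-wise max over neighbors
    return V_next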
The final node embedding is the concatenation of embeddings at each depth: Vt(v) = ⇥ V 0 t (v), . . . , V K t (v) ⇤ , (3) where K is a hyperparameter (we experiment with K 2 {0, 1, 2}) and V 0 t (v) = [Ft(v), Mt(v)]. 8 Relying on utterance t −1 is useful when utterance t answers a question, e.g., “do you have any google friends?” “No.” 9Using sum or mean slightly hurts performance. 3.3 Utterance Embedding and Generation We embed and generate utterances using Long Short Term Memory (LSTM) networks that take the graph embeddings into account. Embedding. On turn t, upon receiving an utterance consisting of nt tokens, xt = (xt,1, . . . , xt,nt), the LSTM maps it to a vector as follows: ht,j = LSTMenc(ht,j−1, At(xt,j)), (4) where ht,0 = ht−1,nt−1, and At is an entity abstraction function, explained below. The final hidden state ht,nt is used as the utterance embedding ut, which updates the mention vectors as described in Section 3.2. In our dialogue task, the identity of an entity is unimportant. For example, replacing google with alphabet in Figure 1 should make little difference to the conversation. The role of an entity is determined instead by its relation to other entities and relevant utterances. Therefore, we define the abstraction At(y) for a word y as follows: if y is linked to an entity v, then we represent an entity by its type (school, company etc.) embedding concatenated with its current node embedding: At(y) = [Etype(y), Vt(v)]. Note that Vt(v) is determined only by its structural features and its context. If y is a non-entity, then At(y) is the word embedding of y concatenated with a zero vector of the same dimensionality as Vt(v). This way, the representation of an entity only depends on its structural properties given by the KB and the dialogue context, which enables the model to generalize to unseen entities at test time. Generation. Now, assuming we have embedded utterance xt−1 into ht−1,nt−1 as described above, we use another LSTM to generate utterance xt. Formally, we carry over the last utterance embedding ht,0 = ht−1,nt−1 and define: ht,j = LSTMdec(ht,j−1, [At(xt,j), ct,j]), (5) where ct,j is a weighted sum of node embeddings in the current turn: ct,j = P v2Gt ↵t,j,vVt(v), where ↵t,j,v are the attention weights over the nodes. Intuitively, high weight should be given to relevant entity nodes as shown in Figure 3,. We compute the weights through standard attention mechanism (Bahdanau et al., 2015): ↵t,j = softmax(st,j), st,j,v = wattn · tanh $ W attn [ht,j−1, Vt(v)] % , 1770 where vector wattn and W attn are parameters. Finally, we define a distribution over both words in the vocabulary and nodes in Gt using the copying mechanism of Jia and Liang (2016): p(xt,j+1 = y | Gt, xt,j) / exp $ W vocabht,j + b % , p(xt,j+1 = r(v) | Gt, xt,j) / exp (st,j,v) , where y is a word in the vocabulary, W vocab and b are parameters, and r(v) is the realization of the entity represented by node v, e.g., google is realized to “Google” during copying.10 4 Experiments We compare our model with a rule-based system and a baseline neural model. Both automatic and human evaluations are conducted to test the models in terms of fluency, correctness, cooperation, and human-likeness. The results show that DynoNet is able to converse with humans in a coherent and strategic way. 4.1 Setup We randomly split the data into train, dev, and test sets (8:1:1). We use a one-layer LSTM with 100 hidden units and 100-dimensional word vectors for both the encoder and the decoder (Section 3.3). 
Each successful dialogue is turned into two examples, each from the perspective of one of the two agents. We maximize the log-likelihood of all utterances in the dialogues. The parameters are optimized by AdaGrad (Duchi et al., 2010) with an initial learning rate of 0.5. We trained for at least 10 epochs; after that, training stops if there is no improvement on the dev set for 5 epochs. By default, we perform K = 2 iterations of message passing to compute node embeddings (Section 3.2). For decoding, we sequentially sample from the output distribution with a softmax temperature of 0.5.11 Hyperparameters are tuned on the dev set. We compare DynoNet with its static cousion (StanoNet) and a rule-based system (Rule). StanoNet uses G0 throughout the dialogue, thus the dialogue history is completely contained in the LSTM states instead of being injected into the knowledge graph. Rule maintains weights for each entity and each item in the KB to decide 10 We realize an entity by sampling from the empirical distribution of its surface forms found in the training data. 11 Since selection is a common ‘utterance’ in our dataset and neural generation models are susceptible to overgenerating common sentences, we halve its probability during sampling. what to talk about and which item to select. It has a pattern-matching semantic parser, a rulebased policy, and a templated generator. See Appendix G for details. 4.2 Evaluation We test our systems in two interactive settings: bot-bot chat and bot-human chat. We perform both automatic evaluation and human evaluation. Automatic Evaluation. First, we compute the cross-entropy (`) of a model on test data. As shown in Table 4, DynoNet has the lowest test loss. Next, we have a model chat with itself on the scenarios from the test set.12 We evaluate the chats with respect to language variation, effectiveness, and strategy. For language variation, we report the average utterance length Lu and the unigram entropy H in Table 4. Compared to Rule, the neural models tend to generate shorter utterances (Li et al., 2016b; Serban et al., 2017b). However, they are more diverse; for example, questions are asked in multiple ways such as “Do you have ...”, “Any friends like ...”, “What about ...”. At the discourse level, we expect the distribution of a bot’s utterance types to match the distribution of human’s. We show percentages of each utterance type in Table 4. For Rule, the decision about which action to take is written in the rules, while StanoNet and DynoNet learned to behave in a more human-like way, frequently informing and asking questions. To measure effectiveness, we compute the overall success rate (C) and the success rate per turn (CT ) and per selection (CS). As shown in Table 4, humans are the best at this game, followed by Rule which is comparable to DynoNet. Next, we investigate the strategies leading to these results. An agent needs to decide which entity/attribute to check first to quickly reduce the search space. We hypothesize that humans tend to first focus on a majority entity and an attribute with fewer unique values (Section 2.3). For example, in the scenario in Table 6, time and location are likely to be mentioned first. We show the average frequency of first-mentioned entities (#Ent1) and the average number of unique values for first-mentioned attributes (|Attr1|) in Ta12 We limit the number of turns in bot-bot chat to be the maximum number of turns humans took in the test set (46 turns). 
1771 System ` # Lu H C " CT " CS " Sel Inf Ask Ans Greet #Ent1 |Attr1| #Ent #Attr Human 5.10 4.57 .82 .07 .38 .21 .31 .17 .08 .08 .55 .35 6.1 2.6 Rule 7.61 3.37 .90 .05 .29 .18 .34 .23 .00 .12 .24 .61 9.9 3.0 StanoNet 2.20 4.01 4.05 .78 .04 .18 .19 .26 .12 .23 .09 .61 .19 7.1 2.9 DynoNet 2.13 3.37 3.90 .96 .06 .25 .22 .26 .13 .20 .12 .55 .18 5.2 2.5 Table 4: Automatic evaluation on human-human and bot-bot chats on test scenarios. We use " / # to indicate that higher / lower values are better; otherwise the objective is to match humans’ statistics. Best results (except Human) are in bold. Neural models generate shorter (lower Lu) but more diverse (higher H) utterances. Overall, their distributions of utterance types match those of the humans’. (We only show the most frequent speech acts therefore the numbers do not sum to 1.) Rule is effective in completing the task (higher CS), but it is not information-efficient given the large number of attributes (#Attr) and entities (#Ent) mentioned. ble 4.13 Both DynoNet and StanoNet successfully match human’s starting strategy by favoring entities of higher frequency and attributes of smaller domain size. To examine the overall strategy, we show the average number of attributes (#Attr) and entities (#Ent) mentioned during the conversation in Table 4. Humans and DynoNet strategically focus on a few attributes and entities, whereas Rule needs almost twice entities to achieve similar success rates. This suggests that the effectiveness of Rule mainly comes from large amounts of unselective information, which is consistent with comments from their human partners. Partner Evaluation. We generated 200 new scenarios and put up the bots on AMT using the same chat interface that was used for data collection. The bots follow simple turn-taking rules explained in Appendix H. Each AMT worker is randomly paired with Rule, StanoNet, DynoNet, or another human (but the worker doesn’t know which), and we make sure that all four types of agents are tested in each scenario at least once. At the end of each dialogue, humans are asked to rate their partner in terms of fluency, correctness, cooperation, and human-likeness from 1 (very bad) to 5 (very good), along with optional comments. We show the average ratings (with significance tests) in Table 5 and the histograms in Appendix J. In terms of fluency, the models have similar performance since the utterances are usually short. Judgment on correctness is a mere guess since the evaluator cannot see the partner’s KB; we will analyze correctness more meaningfully in the thirdparty evaluation below. 13 Both numbers are normalized to [0, 1] with respect to all entities/attributes in the corresponding KB. Noticeably, DynoNet is more cooperative than the other models. As shown in the example dialogues in Table 6, DynoNet cooperates smoothly with the human partner, e.g., replying with relevant information about morning/indoor friends when the partner mentioned that all her friends prefer morning and most like indoor. StanoNet starts well but doesn’t follow up on the morning friend, presumably because the morning node is not updated dynamically when mentioned by the partner. Rule follows the partner poorly. In the comments, the biggest complaint about Rule was that it was not ‘listening’ or ‘understanding’. Overall, DynoNet achieves better partner satisfaction, especially in cooperation. Third-party Evaluation. 
We also created a third-party evaluation task, where an independent AMT worker is shown a conversation and the KB of one of the agents; she is asked to rate the same aspects of the agent as in the partner evaluation and provide justifications. Each agent in a dialogue is rated by at least 5 people. The average ratings and histograms are shown in Table 5 and Appendix J. For correctness, we see that Rule has the best performance since it always tells the truth, whereas humans can make mistakes due to carelessness and the neural models can generate false information. For example, in Table 6, DynoNet ‘lied’ when saying that it has a morning friend who likes outdoor. Surprisingly, there is a discrepancy between the two evaluation modes in terms of cooperation and human-likeness. Manual analysis of the comments indicates that third-party evaluators focus less on the dialogue strategy and more on linguistic features, probably because they were not fully engaged in the dialogue. For example, justification 1772 System C CT CS Partner eval Third-party eval Flnt Crct Coop Human Flnt Crct Coop Human Human .89 .07 .36 4.2rds 4.3rds 4.2rds 4.1rds 4.0 4.3ds 4.0ds 4.1rds Rule .88 .06 .29 3.6 4.0 3.5 3.5 4.0 4.4hds 3.9s 4.0s StanoNet .76 .04 .23 3.5 3.8 3.4 3.3 4.0 4.0 3.8 3.8 DynoNet .87 .05 .27 3.8s 4.0 3.8rs 3.6s 4.0 4.1 3.9 3.9 Table 5: Results on human-bot/human chats. Best results (except Human) in each column are in bold. We report the average ratings of each system. For third-party evaluation, we first take mean of each question then average the ratings. DynoNet has the best partner satisfaction in terms of fluency (Flnt), correctness (Crct), cooperation (Coop), human likeness (Human). The superscript of a result indicates that its advantage over other systems (r: Rule, s: StanoNet, d: DynoNet) is statistically significant with p < 0.05 given by paired t-tests. for cooperation often mentions frequent questions and timely answers, less attention is paid to what is asked about though. For human-likeness, partner evaluation is largely correlated with coherence (e.g., not repeating or ignoring past information) and task success, whereas third-party evaluators often rely on informality (e.g., usage of colloquia like “hiya”, capitalization, and abbreviation) or intuition. Interestingly, third-party evaluators noted most phenomena listed in Table 2 as indicators of humanbeings, e.g., correcting oneself, making chit-chat other than simply finishing the task. See example comments in Appendix K. 4.3 Ablation Studies Our model has two novel designs: entity abstraction and message passing for node embeddings. Table 7 shows what happens if we ablate these. When the number of message passing iterations, K, is reduced from 2 to 0, the loss consistently increases. Removing entity abstraction—meaning adding entity embeddings to node embeddings and the LSTM input embeddings—also degrades performance. This shows that DynoNet benefits from contextually-defined, structural node embeddings rather than ones based on a classic lookup table. Model ` DynoNet (K = 2) 2.16 DynoNet (K = 1) 2.20 DynoNet (K = 0) 2.26 DynoNet (K = 2) w/o entity abstraction 2.21 Table 7: Ablations of our model on the dev set show the importance of entity abstraction and message passing (K = 2). 5 Discussion and Related Work There has been a recent surge of interest in end-to-end task-oriented dialogue systems, though progress has been limited by the size of available datasets (Serban et al., 2015a). 
Most work focuses on information-querying tasks, using Wizard-ofOz data collection (Williams et al., 2016; Asri et al., 2016) or simulators (Bordes and Weston, 2017; Li et al., 2016d), In contrast, collaborative dialogues are easy to collect as natural human conversations, and are also challenging enough given the large number of scenarios and diverse conversation phenomena. There are some interesting strategic dialogue datasets—settlers of Catan (Afantenos et al., 2012) (2K turns) and the cards corpus (Potts, 2012) (1.3K dialogues), as well as work on dialogue strategies (Keizer et al., 2017; Vogel et al., 2013), though no full dialogue system has been built for these datasets. Most task-oriented dialogue systems follow the POMDP-based approach (Williams and Young, 2007; Young et al., 2013). Despite their success (Wen et al., 2017; Dhingra et al., 2017; Su et al., 2016), the requirement for handcrafted slots limits their scalability to new domains and burdens data collection with extra state labeling. To go past this limit, Bordes and Weston (2017) proposed a Memory-Networks-based approach without domain-specific features. However, the memory is unstructured and interfacing with KBs relies on API calls, whereas our model embeds both the dialogue history and the KB structurally. Williams et al. (2017) use an LSTM to automatically infer the dialogue state, but as they focus on dialogue control rather than the full problem, the response is modeled as a templated action, which restricts the generation of richer utterances. Our network ar1773 Friends of A ID Name Company Time Location 1 Kathy TRT Holdings afternoon indoor 2 Jason Dollar General afternoon indoor 3 Johnny TRT Holdings afternoon outdoor 4 Frank SFN Group afternoon indoor 5 Catherine Dollar General afternoon indoor 6 Catherine Weis Markets afternoon indoor 7 Kathleen TRT Holdings morning indoor 8 Lori TRT Holdings afternoon indoor 9 Frank L&L Hawaiian Barbecue afternoon outdoor Friends of B ID Name Company Time Location 1 Justin New Era Tickets morning indoor 2 Kathleen TRT Holdings morning indoor 3 Gloria L&L Hawaiian Barbecue morning indoor 4 Kathleen Advance Auto Parts morning outdoor 5 Justin Arctic Cat morning indoor 6 Anna Dollar General morning indoor 7 Steven SFN Group morning indoor 8 Wayne R.J. Corman Railroad Group morning indoor 9 Alexander R.J. Corman Railroad Group morning indoor A: Human B: Human A: DynoNet B: Human A: Hi B: hey || i have one outdoor A: I have 4 TRT Holdings || I have 2 outdoor one Johnny, other Frank B: i only have one TRT Holdings - Kathleen A: SELECT 7 B: SELECT 2 A: hi B: Hello || all my friends prefer morning A: 1 of my morning likes the outdoors B: and all like indoor except for one A: do they work for trt holdings? B: Kathleen? A: SELECT 7 B: SELECT 2 A: StanoNet B: Human A: Human B: Rule A: Hello B: hi A: Hello || I have one morning person. B: all of my friends like mornings A: My friend prefers afternoon works at trt holdings. B: what is their name? A: Likes indoors. B: what is your fiend who likes morning name? A: They work for trt holdings. B: SELECT 2 A: SELECT 7 B: hiya A: hEY B: I have 1 indoors and kathleen. A: Most of mine are indoors. B: SELECT 1 A: I have one morning and rest afternoon. B: Do you have any friend working at l hawaiian? A: I don’t know Justin B: I have 1 alexander. ... Table 6: Examples of human-bot chats. The mutual friend is highlighted in blue in each KB. Bots’ utterances are in bold and selected items are represented by item IDs. 
Only the first half of the humanRule chat is shown due to limited space. Multiple utterances of one agent rae separated by ||. chitecture is most similar to EntNet (Henaff et al., 2017), where memories are also updated by input sentences recurrently. The main difference is that our model allows information to be propagated between structured entities, which is shown to be crucial in our setting (Section 4.3). Our work is also related to language generation conditioned on knowledge bases (Mei et al., 2016; Kiddon et al., 2016). One challenge here is to avoid generating false or contradicting statements, which is currently a weakness of neural models. Our model is mostly accurate when generating facts and answering existence questions about a single entity, but will need a more advanced attention mechanism for generating utterances involving multiple entities, e.g., attending to items or attributes first, then selecting entities; generating high-level concepts before composing them to natural tokens (Serban et al., 2017a). In conclusion, we believe the symmetric collaborative dialogue setting and our dataset provide unique opportunities at the interface of traditional task-oriented dialogue and open-domain chat. We also offered DynoNet as a promising means for open-ended dialogue state representation. Our dataset facilitates the study of pragmatics and human strategies in dialogue—a good stepping stone towards learning more complex dialogues such as negotiation. Acknowledgments. This work is supported by DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462. Mike Kayser worked on an early version of the project while he was at Stanford. We also thank members of the Stanford NLP group for insightful discussions. Reproducibility. All code, data, and experiments for this paper are available on the CodaLab platform: https: //worksheets.codalab.org/worksheets/ 0xc757f29f5c794e5eb7bfa8ca9c945573. 1774 References S. Afantenos, N. Asher, F. Benamara, A. Cadilhac, C. D´egremont, P. Denis, M. Guhe, S. Keizer, A. Lascarides, O. Lemon, P. Muller, S. Paul, V. Rieser, and L. Vieu. 2012. Developing a corpus of strategic conversation in the settlers of catan. In SeineDial 2012 The 16th Workshop on the Semantics and Pragmatics of Dialogue. L. E. Asri, H. Schulz, S. Sharma, J. Zumer, J. Harris, E. Fine, R. Mehrotra, and K. Suleman. 2016. Frames: A corpus for adding memory to goaloriented dialogue systems. Maluuba Technical Report . D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). A. Bordes and J. Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations (ICLR). B. Dhingra, L. Li, X. Li, J. Gao, Y. Chen, F. Ahmed, and L. Deng. 2017. End-to-end reinforcement learning of dialogue agents for information access. In Association for Computational Linguistics (ACL). J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). M. Henaff, J. Weston, A. Szlam, A. Bordes, and Y. LeCun. 2017. Tracking the world state with recurrent entity networks. In International Conference on Learning Representations (ICLR). E. Ivanovic. 2005. Dialogue act tagging for instant messaging chat sessions. In Association for Computational Linguistics (ACL). R. Jia and P. Liang. 2016. 
Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL). S. Keizer, M. Guhe, H. Cuayahuitl, I. Efstathiou, K. Engelbrecht, M. Dobre, A. Lascarides, and O. Lemon. 2017. Evaluating persuasion strategies and deep reinforcement learning methods for negotiation dialogue agents. In European Association for Computational Linguistics (EACL). C. Kiddon, L. S. Zettlemoyer, and Y. Choi. 2016. Globally coherent text generation with neural checklist models. In Empirical Methods in Natural Language Processing (EMNLP). J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. 2016a. A persona-based neural conversation model. In Association for Computational Linguistics (ACL). J. Li, M. Galley, C. Brockett, J. Gao, and W. B. Dolan. 2016b. A diversity-promoting objective function for neural conversation models. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). J. Li, W. Monroe, A. Ritter, D. Jurafsky, M. Galley, and J. Gao. 2016c. Deep reinforcement learning for dialogue generation. In Empirical Methods in Natural Language Processing (EMNLP). X. Li, Z. C. Lipton, B. Dhingra, L. Li, J. Gao, and Y. Chen. 2016d. A user simulator for taskcompletion dialogues. arXiv . C. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Empirical Methods in Natural Language Processing (EMNLP). R. T. Lowe, N. Pow, I. Serban, L. Charlin, C. Liu, and J. Pineau. 2017. Training End-to-End dialogue systems with the ubuntu dialogue corpus. Dialogue and Discourse 8. H. Mei, M. Bansal, and M. R. Walter. 2016. What to talk about and how? selective generation using LSTMs with coarse-to-fine alignment. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL). H. Mei, M. Bansal, and M. R. Walter. 2017. Coherent dialogue with attention-based language models. In Association for the Advancement of Artificial Intelligence (AAAI). C. Potts. 2012. Goal-driven answers in the Cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics. I. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. C. Courville. 2017a. Multiresolution recurrent neural networks: An application to dialogue response generation. In Association for the Advancement of Artificial Intelligence (AAAI). I. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. C. Courville, and Y. Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In Association for the Advancement of Artificial Intelligence (AAAI). I. V. Serban, R. Lowe, L. Charlin, and J. Pineau. 2015a. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742 . I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. 2015b. Building end-to-end dialogue systems using generative hierarchical neural network models. arXiv preprint arXiv:1507.04808 . 1775 L. Shang, Z. Lu, and H. Li. 2015. Neural responding machine for short-text conversation. In Association for Computational Linguistics (ACL). A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In North American Association for Computational Linguistics (NAACL). P. Su, M. Gasic, N. Mrksic, L. M. 
Rojas-Barahona, S. Ultes, D. Vandyke, T. Wen, and S. J. Young. 2016. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689 . A. Vogel, M. Bodoia, C. Potts, and D. Jurafsky. 2013. Emergence of gricean maxims from multi-agent decision theory. In North American Association for Computational Linguistics (NAACL). pages 1072– 1081. T. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P. Su, S. Ultes, D. Vandyke, and S. Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In European Association for Computational Linguistics (EACL). J. D. Williams, K. Asadi, and G. Zweig. 2017. Hybrid code networks: Practical and efficient end-toend dialog control with supervised and reinforcement learning. In Association for Computational Linguistics (ACL). J. D. Williams, A. Raux, and M. Henderson. 2016. The dialog state tracking challenge series: A review. Dialogue and Discourse 7. J. D. Williams and S. Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language 21(2):393– 422. S. Young, M. Gasic, B. Thomson, and J. D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179. 1776
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1777–1788 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1163 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1777–1788 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1163 Neural Belief Tracker: Data-Driven Dialogue State Tracking Nikola Mrkˇsi´c1, Diarmuid ´O S´eaghdha2 Tsung-Hsien Wen1, Blaise Thomson2, Steve Young1 1 University of Cambridge 2 Apple Inc. {nm480, thw28, sjy}@cam.ac.uk {doseaghdha, blaisethom}@apple.com Abstract One of the core components of modern spoken dialogue systems is the belief tracker, which estimates the user’s goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted lexicons for capturing some of the linguistic variation in users’ language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided. 1 Introduction Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The dialogue state tracking (DST) component of an SDS serves to interpret user input and update the belief state, which is the system’s internal representation of the state of the conversation (Young et al., 2010). This is a probability distribution over dialogue states used by the downstream dialogue manager to decide which action the system should User: I’m looking for a cheaper restaurant inform(price=cheap) System: Sure. What kind - and where? User: Thai food, somewhere downtown inform(price=cheap, food=Thai, area=centre) System: The House serves cheap Thai food User: Where is it? inform(price=cheap, food=Thai, area=centre); request(address) System: The House is at 106 Regent Street Figure 1: Annotated dialogue states in a sample dialogue. Underlined words show rephrasings which are typically handled using semantic dictionaries. perform next (Su et al., 2016a,b); the system action is then verbalised by the natural language generator (Wen et al., 2015a,b; Duˇsek and Jurˇc´ıˇcek, 2015). The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets (Williams et al., 2016). In this framework, the dialogue system is supported by a domain ontology which describes the range of user intents the system can process. The ontology defines a collection of slots and the values that each slot can take. 
The system must track the search constraints expressed by users (goals or informable slots) and questions the users ask about search results (requests), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). The example in Figure 1 shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output. 1777 FOOD=CHEAP: [affordable, budget, low-cost, low-priced, inexpensive, cheaper, economic, ...] RATING=HIGH: [best, high-rated, highly rated, top-rated, cool, chic, popular, trendy, ...] AREA=CENTRE: [center, downtown, central, city centre, midtown, town centre, ...] Figure 2: An example semantic dictionary with rephrasings for three ontology values in a restaurant search domain. Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of domain-specific annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by Henderson et al. (2014d). Such coupled models typically rely on manually constructed semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure 2 gives an example of such a dictionary for three slot-value pairs. This approach, which we term delexicalisation, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see Vuli´c et al. (2017)). In this paper, we present two new models, collectively called the Neural Belief Tracker (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring any hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontologydefined intents have been expressed by the user up to that point in the conversation. To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: a) NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and b) the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible. 
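To make the delexicalisation baseline concrete before turning to prior work, the toy sketch below implements exact-match tagging against a small hand-written dictionary in the spirit of Figure 2. It is illustrative only: the ontology entries and rephrasings are made-up examples, not the actual DSTC2 resources or any baseline system's code.

```python
# Toy illustration of dictionary-based delexicalisation (not the authors' code).
# Ontology values and rephrasings are invented examples in the spirit of Figure 2.

ONTOLOGY = {
    ("price", "cheap"): ["cheap", "affordable", "inexpensive", "budget"],
    ("area", "centre"): ["centre", "center", "downtown", "central"],
    ("food", "thai"): ["thai"],
}

def delexicalise(utterance: str) -> str:
    """Replace exact matches of ontology values (or their listed
    rephrasings) with generic <slot=value> tags."""
    out = []
    for tok in utterance.lower().split():
        tag = None
        for (slot, value), surface_forms in ONTOLOGY.items():
            if tok in surface_forms:
                tag = f"<{slot}={value}>"
                break
        out.append(tag or tok)
    return " ".join(out)

if __name__ == "__main__":
    # All three value mentions are covered by the dictionary and get tagged.
    print(delexicalise("I want an affordable Thai place downtown"))
    # "low-priced" is a valid rephrasing that this hand-written dictionary
    # happens to miss, so it is left untagged.
    print(delexicalise("somewhere low-priced would be great"))
```

Every paraphrase the dictionary misses is invisible to such a model, which is precisely why the NBT models replace exact matching with reasoning over pre-trained word vectors.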
2 Background Models for probabilistic dialogue state tracking, or belief tracking, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user’s goals (Bohus and Rudnicky, 2006; Williams and Young, 2007; Young et al., 2010). Modern dialogue management policies can learn to use a tracker’s distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm (Williams et al., 2013; Henderson et al., 2014b,a). In this setting, the task is defined by an ontology that enumerates the goals a user can specify and the attributes of entities that the user can request information about. Many different belief tracking models have been proposed in the literature, from generative (Thomson and Young, 2010) and discriminative (Henderson et al., 2014d) statistical models to rule-based systems (Wang and Lemon, 2013). To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances:1 Separate SLU Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state (Thomson and Young, 2010; Wang and Lemon, 2013; Lee and Kim, 2016; Perez, 2016; Perez and Liu, 2017; Sun et al., 2016; Jang et al., 2016; Shi et al., 2016; Dernoncourt et al., 2016; Liu and Perez, 2017; Vodol´an et al., 2017). 1The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoders (Williams, 2014; Williams et al., 2016). This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both. 1778 Figure 3: Architecture of the NBT Model. The implementation of the three representation learning subcomponents can be modified, as long as these produce adequate vector representations which the downstream model components can use to decide whether the current candidate slot-value pair was expressed in the user utterance (taking into account the preceding system act). In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix (Wang, 1994). However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing (Mairesse et al., 2009). This line of work later shifted focus to robust handling of rich ASR output (Henderson et al., 2012; Tur et al., 2013). SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user’s intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used (Raymond and Ricardi, 2007; Yao et al., 2014; Celikyilmaz and Hakkani-Tur, 2015; Mesnil et al., 2015; Peng et al., 2015; Zhang and Wang, 2016; Liu and Lane, 2016b; Vu et al., 2016; Liu and Lane, 2016a, i.a.). 
Other approaches adopt a more complex modelling structure inspired by semantic parsing (Saleh et al., 2014; Vlachos and Clark, 2014). One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains. Joint SLU/DST Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output (Henderson et al., 2014d; Sun et al., 2014; Zilka and Jurcicek, 2015; Mrkˇsi´c et al., 2015). In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. Joint models typically rely on a strategy known as delexicalisation whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like n-gram features such as [want tagged-value food]. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct semantic dictionaries which list the potential rephrasings for all slot values. As shown by Mrkˇsi´c et al. (2016), the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains. The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the avail1779 able data by: 1) leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; 2) maximising the number of parameters shared across ontology values; and 3) having the flexibility to learn domainspecific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy. 3 Neural Belief Tracker The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user’s goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal FOOD=ITALIAN has been expressed in ‘I’m looking for good pizza’. To perform belief tracking, the NBT model iterates over all candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user. Figure 3 presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance (r), the current candidate slot-value pair (c) and the system dialogue acts (tq, ts, tv). Subsequently, the learned vector representations interact through the context modelling and semantic decoding submodules to obtain the intermediate interaction summary vectors dr, dc and d. 
These are used as input to the final decision-making module which decides whether the user expressed the intent represented by the candidate slot-value pair. 3.1 Representation Learning For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. As shown by Mrkˇsi´c et al. (2016), specialising word vectors to express semantic similarity rather than relatedness is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors (Wieting et al., 2015) throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (i.e. inexpensive to cheap) will be recognised purely by their position in the original vector space (see also Rockt¨aschel et al. (2016)). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots. Let u represent a user utterance consisting of ku words u1, u2, . . . , uku. Each word has an associated word vector u1, . . . , uku. We propose two model variants which differ in the method used to produce vector representations of u: NBT-DNN and NBT-CNN. Both act over the constituent ngrams of the utterance. Let vn i be the concatenation of the n word vectors starting at index i, so that: vn i = ui ⊕. . . ⊕ui+n−1 (1) where ⊕denotes vector concatenation. The simpler of our two models, which we term NBT-DNN, is shown in Figure 4. This model computes cumulative n-gram representation vectors r1, r2 and r3, which are the n-gram ‘summaries’ of the unigrams, bigrams and trigrams in the user utterance: rn = ku−n+1 X i=1 vn i (2) Each of these vectors is then non-linearly mapped to intermediate representations of the same size: r′ n = σ(W s nrn + bs n) (3) where the weight matrices and bias terms map the cumulative n-grams to vectors of the same dimensionality and σ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript s). The three vectors are then summed to obtain a single representation for the user utterance: r = r′ 1 + r′ 2 + r′ 3 (4) The cumulative n-gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values. 1780 Figure 4: NBT-DNN MODEL. Word vectors of n-grams (n = 1, 2, 3) are summed to obtain cumulative n-grams, then passed through another hidden layer and summed to obtain the utterance representation r. Figure 5: NBT-CNN Model. L convolutional filters of window sizes 1, 2, 3 are applied to word vectors of the given utterance (L = 3 in the diagram, but L = 300 in the system). The convolutions are followed by the ReLU activation function and max-pooling to produce summary n-gram representations. These are summed to obtain the utterance representation r. 
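To make the NBT-DNN composition concrete, the following minimal numpy sketch reproduces Equations (1)-(4). It is an illustration only: dimensionalities are reduced for brevity, the random parameters stand in for the learned W^s_n and b^s_n, and the function names are ours rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_dnn_utterance_rep(word_vectors, weights, biases):
    """NBT-DNN utterance representation r, following Equations (1)-(4).

    word_vectors: (k_u, D) matrix of pre-trained word vectors u_1 .. u_ku.
    weights[n], biases[n]: parameters W^s_n, b^s_n for n = 1, 2, 3, mapping
    the cumulative n-gram vector (size n*D) to a vector of size D.
    """
    k_u, D = word_vectors.shape
    r_prime = []
    for n in (1, 2, 3):
        # v^n_i = u_i (+) ... (+) u_{i+n-1}: concatenation of n consecutive vectors (Eq. 1)
        ngrams = [word_vectors[i:i + n].reshape(-1) for i in range(k_u - n + 1)]
        # r_n: unweighted sum of all n-gram vectors (Eq. 2)
        r_n = np.sum(ngrams, axis=0)
        # r'_n = sigmoid(W^s_n r_n + b^s_n)  (Eq. 3)
        r_prime.append(sigmoid(weights[n] @ r_n + biases[n]))
    # r = r'_1 + r'_2 + r'_3  (Eq. 4)
    return sum(r_prime)

# Example with random parameters (D = 300 in the paper; 8 here for brevity)
rng = np.random.default_rng(0)
D, k_u = 8, 5
utt = rng.normal(size=(k_u, D))
W = {n: rng.normal(scale=0.1, size=(D, n * D)) for n in (1, 2, 3)}
b = {n: np.zeros(D) for n in (1, 2, 3)}
print(nbt_dnn_utterance_rep(utt, W, b).shape)  # (8,)
```

Written out this way, it is easy to see that the cumulative n-grams are order-insensitive sums over the whole utterance, which is exactly the limitation addressed by the convolutional variant described next.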
NBT-CNN Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding (Collobert et al., 2011; Kalchbrenner et al., 2014; Kim, 2014). These models typically apply a number of convolutional filters to n-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the NBT-CNN model applies L = 300 different filters for n-gram lengths of 1, 2 and 3 (Figure 5). Let F s n ∈RL×nD denote the collection of filters for each value of n, where D = 300 is the word vector dimensionality. If vn i denotes the concatenation of n word vectors starting at index i, let mn = [vn 1; vn 2; . . . ; vn ku−n+1] be the list of n-grams that convolutional filters of length n run over. The three intermediate representations are then given by: Rn = F s n mn (5) Each column of the intermediate matrices Rn is produced by a single convolutional filter of length n. We obtain summary n-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function (Nair and Hinton, 2010) and max-pooling over time (i.e. columns of the matrix) to get a single feature for each of the L filters applied to the utterance: r′ n = maxpool (ReLU (Rn + bs n)) (6) where bs n is a bias term broadcast across all filters. Finally, the three summary n-gram representations are summed to obtain the final utterance representation vector r (as in Equation 4). The NBT-CNN model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the NBT-DNN’s cumulative n-grams. 3.2 Semantic Decoding The NBT diagram in Figure 3 shows that the utterance representation r and the candidate slotvalue pair representation c directly interact through the semantic decoding module. This component decides whether the user explicitly expressed an intent matching the current candidate pair 1781 (i.e. without taking the dialogue context into account). Examples of such matches would be ‘I want Thai food’ with food=Thai or more demanding ones such as ‘a pricey restaurant’ with price=expensive. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a semantic dictionary listing all potential rephrasings for each value in the domain ontology. Let the vector space representations of a candidate pair’s slot name and value be given by cs and cv (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector c of the same dimensionality as the utterance representation r. These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express: c = σ W s c (cs + cv) + bs c  (7) d = r ⊗c (8) where ⊗denotes element-wise vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in d to a single scalar. 
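The convolutional variant and the semantic decoding step can likewise be sketched in plain numpy. This is an illustration of Equations (5)-(8) under reduced dimensionalities (L = D = 8 rather than 300); the function names and parameter layout are our own choices, not the released implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_cnn_utterance_rep(word_vectors, filters, biases):
    """NBT-CNN summary n-gram representations (Equations 5-6).

    word_vectors: (k_u, D) matrix of word vectors.
    filters[n]: (L, n*D) convolutional filters F^s_n; biases[n]: (L,) bias b^s_n.
    """
    k_u, D = word_vectors.shape
    r = 0.0
    for n in (1, 2, 3):
        # m_n: all concatenated n-grams, shape (k_u - n + 1, n*D)
        m_n = np.stack([word_vectors[i:i + n].reshape(-1)
                        for i in range(k_u - n + 1)])
        # R_n = F^s_n m_n: one column per filter application (Eq. 5)
        R_n = m_n @ filters[n].T                         # (k_u - n + 1, L)
        # ReLU then max-pool over time: one feature per filter (Eq. 6)
        r_prime_n = relu(R_n + biases[n]).max(axis=0)    # (L,)
        r = r + r_prime_n
    return r  # summed as in Equation (4)

def semantic_decode(r, c_slot, c_value, W_c, b_c):
    """Equations (7)-(8): map the candidate slot-value pair into the same
    space as r and interact by element-wise multiplication."""
    c = sigmoid(W_c @ (c_slot + c_value) + b_c)
    return r * c   # d = r (element-wise) c

# Illustrative usage with random vectors and parameters
rng = np.random.default_rng(1)
D = L = 8
utt = rng.normal(size=(6, D))
F = {n: rng.normal(scale=0.1, size=(L, n * D)) for n in (1, 2, 3)}
b = {n: np.zeros(L) for n in (1, 2, 3)}
r = nbt_cnn_utterance_rep(utt, F, b)
d = semantic_decode(r, rng.normal(size=D), rng.normal(size=D),
                    rng.normal(scale=0.1, size=(L, D)), np.zeros(L))
print(d.shape)  # (8,)
```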
The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in r and c.2 3.3 Context Modelling This ‘decoder’ does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of context, i.e. the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two system acts: 1. System Request: The system asks the user about the value of a specific slot Tq. If the system utterance is: ‘what price range would 2We also tried to concatenate r and c and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors. you like?’ and the user answers with any, the model must infer the reference to price range, and not to other slots such as area or food. 2. System Confirm: The system asks the user to confirm whether a specific slot-value pair (Ts, Tv) is part of their desired constraints. For example, if the user responds to ‘how about Turkish food?’ with ‘yes’, the model must be aware of the system act in order to correctly update the belief state. If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let tq and (ts, tv) be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, candidate pair (cs, cv) and utterance representation r: mr = (cs · tq)r (9) mc = (cs · ts)(cv · tv)r (10) where · denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the three-way interaction between the utterance, candidate slot-value pair and the slot value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision. Binary Decision Maker The intermediate representations are passed through another hidden layer and then combined. If φdim(x) = σ(Wx + b) is a layer which maps input vector x to a vector of size dim, the input to the final binary softmax (which represents the decision) is given by: y = φ2 φ100(d) + φ100(mr) + φ100(mc)  4 Belief State Update Mechanism In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech 1782 recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments. In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR N-best lists. 
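Before turning to that update rule, the context gates and the decision layer just described can be summarised in a short sketch. This is an illustrative numpy rendering of Equations (9)-(10) and of y = φ2(φ100(d) + φ100(mr) + φ100(mc)); the dictionary layout of the parameters and the reduced hidden size are our own choices, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def phi(x, W, b):
    """A single sigmoid layer phi_dim(x) = sigmoid(Wx + b)."""
    return sigmoid(W @ x + b)

def context_gates(r, c_slot, c_value, t_q, t_s, t_v):
    """Equations (9)-(10): gate the utterance representation by how well the
    candidate pair matches the last system request / confirm act.
    t_q, t_s, t_v are zero vectors when the corresponding act is absent,
    which makes the gates vanish."""
    m_r = np.dot(c_slot, t_q) * r                         # system request gate
    m_c = np.dot(c_slot, t_s) * np.dot(c_value, t_v) * r  # system confirm gate
    return m_r, m_c

def binary_decision(d, m_r, m_c, params):
    """y = phi_2(phi_100(d) + phi_100(m_r) + phi_100(m_c)): a two-way softmax
    over whether the candidate slot-value pair was expressed."""
    h = phi(d, *params["d"]) + phi(m_r, *params["r"]) + phi(m_c, *params["c"])
    W_out, b_out = params["out"]
    logits = W_out @ h + b_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Illustrative usage with small random vectors (hidden size 10 instead of 100)
rng = np.random.default_rng(2)
D, H = 8, 10
params = {k: (rng.normal(scale=0.1, size=(H, D)), np.zeros(H)) for k in ("d", "r", "c")}
params["out"] = (rng.normal(scale=0.1, size=(2, H)), np.zeros(2))
vec = lambda: rng.normal(size=D)
m_r, m_c = context_gates(vec(), vec(), vec(), vec(), vec(), vec())
print(binary_decision(vec(), m_r, m_c, params))
```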
For dialogue turn t, let $sys^{t-1}$ denote the preceding system output, and let $h^{t}$ denote the list of N ASR hypotheses $h^{t}_{i}$ with posterior probabilities $p^{t}_{i}$. For any hypothesis $h^{t}_{i}$, slot s and slot value $v \in V_{s}$, NBT models estimate $P(s, v \mid h^{t}_{i}, sys^{t-1})$, which is the (turn-level) probability that (s, v) was expressed in the given hypothesis. The predictions for N such hypotheses are then combined as:

$$P(s, v \mid h^{t}, sys^{t-1}) = \sum_{i=1}^{N} p^{t}_{i} \, P(s, v \mid h^{t}_{i}, sys^{t-1})$$

This turn-level belief state estimate is then combined with the (cumulative) belief state up to time (t − 1) to get the updated belief state estimate:

$$P(s, v \mid h^{1:t}, sys^{1:t-1}) = \lambda \, P(s, v \mid h^{t}, sys^{t-1}) + (1 - \lambda) \, P(s, v \mid h^{1:t-1}, sys^{1:t-2})$$

where λ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates (tuned on the DSTC2 development set; the best performance was achieved with λ = 0.55). For slot s, the set of its detected values at turn t is then given by:

$$V^{t}_{s} = \{ v \in V_{s} \mid P(s, v \mid h^{1:t}, sys^{1:t-1}) \geq 0.5 \}$$

For informable (i.e. goal-tracking) slots, the value in $V^{t}_{s}$ with the highest probability is chosen as the current goal (if $V^{t}_{s} \neq \emptyset$). For requests, all slots in $V^{t}_{req}$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns.

5 Experiments

5.1 Datasets

Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three informable (i.e. goal-tracking) slots: FOOD, AREA and PRICE. The users can specify values for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight requestable slots (PHONE NUMBER, ADDRESS, etc.). The two datasets are:

1. DSTC2: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 (Henderson et al., 2014a). The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available at mi.eng.cam.ac.uk/~nm480/dstc2-clean.zip. The training data contains 2207 dialogues and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge.

2. WOZ 2.0: Wen et al. (2017) performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al. (2017) using the same data collection procedure, yielding a total of 1200 dialogues. We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is available at mi.eng.cam.ac.uk/~nm480/woz_2.0.zip.

Training Examples The two corpora are used to create training data for two separate experiments.
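Before detailing how those training examples are constructed, the update mechanism above can be made concrete. The sketch below is a plain-Python reading of the turn-level combination, the λ-interpolation (λ = 0.55 is the value reported as tuned on the DSTC2 development set) and the 0.5 detection threshold; the dictionary-based data structures are our own choice, not the authors' code.

```python
import numpy as np

def turn_level_estimate(asr_probs, nbt_turn_probs):
    """P(s,v | h^t, sys^{t-1}) = sum_i p^t_i * P(s,v | h^t_i, sys^{t-1})."""
    return float(np.dot(asr_probs, nbt_turn_probs))

def update_belief(prev_belief, turn_belief, lam=0.55):
    """Interpolate the turn-level estimate with the cumulative belief state."""
    keys = set(prev_belief) | set(turn_belief)
    return {sv: lam * turn_belief.get(sv, 0.0) + (1.0 - lam) * prev_belief.get(sv, 0.0)
            for sv in keys}

def detected_values(belief, slot, threshold=0.5):
    """V^t_s: values of `slot` whose cumulative probability is at least 0.5."""
    return {v: p for (s, v), p in belief.items() if s == slot and p >= threshold}

# Illustrative numbers only: two ASR hypotheses and one candidate pair
asr_probs = [0.7, 0.3]           # posterior probability of each hypothesis
nbt_probs = [0.9, 0.2]           # NBT output per hypothesis for (food, thai)
turn = {("food", "thai"): turn_level_estimate(asr_probs, nbt_probs)}
belief = update_belief({("food", "thai"): 0.4}, turn)
print(detected_values(belief, "food"))  # the goal is the argmax of this set, if non-empty
```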
For each dataset, we iterate over all train set utterances, generating one example for each of the slotvalue pairs in the ontology. An example consists of a transcription, its context (i.e. list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example’s candidate pair. For instance, ‘I would like Irish 1783 food’ would generate a positive example for candidate pair FOOD=IRISH, and a negative example for every other slot-value pair in the ontology. Evaluation We focus on two key evaluation metrics introduced in (Henderson et al., 2014a): 1. Goals (‘joint goal accuracy’): the proportion of dialogue turns where all the user’s search goal constraints were correctly identified; 2. Requests: similarly, the proportion of dialogue turns where user’s requests for information were identified correctly. 5.2 Models We evaluate two NBT model variants: NBT-DNN and NBT-CNN. To train the models, we use the Adam optimizer (Kingma and Ba, 2015) with crossentropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.4 Baseline Models For each of the two datasets, we compare the NBT models to: 1. A baseline system that implements a wellknown competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al. (2014c; 2014d). This model is an n-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to a more sophisticated belief tracking model presented in (Wen et al., 2017). This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike NBTCNN, their CNN operates not over vectors, 4Model hyperparameters were tuned on the respective validation sets. For both datasets, the initial Adam learning rate was set to 0.001, and 1 8th of positive examples were included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping (to [−2.0, 2.0]) was used to handle exploding gradients. Dropout (Srivastava et al., 2014) was used for regularisation (with 50% dropout rate on all intermediate representations). Both NBT models were implemented in TensorFlow (Abadi et al., 2015). but over delexicalised features akin to those used by Henderson et al. (2014c). 2. The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators). The two dictionaries are available at mi.eng.cam. ac.uk/˜nm480/sem-dict.zip. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect. 6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system’s inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons. 
Both baseline models map exact matches of ontology-defined intents (and their lexiconspecified rephrasings) to one-hot delexicalised ngram features. This means that pre-trained vectors cannot be incorporated directly into these models. 6 Results 6.1 Belief Tracking Performance Table 1 shows the performance of NBT models trained and evaluated on DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test, p < 0.05). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries. While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT’s ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models’ goal accuracy rises to 0.96. This 1784 DST Model DSTC2 WOZ 2.0 Goals Requests Goals Requests Delexicalisation-Based Model 69.1 95.7 70.8 87.1 Delexicalisation-Based Model + Semantic Dictionary 72.9* 95.7 83.7* 87.6 NEURAL BELIEF TRACKER: NBT-DNN 72.6* 96.4 84.4* 91.2* NEURAL BELIEF TRACKER: NBT-CNN 73.4* 96.5 84.2* 91.6* Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test; p < 0.05). indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics. 6.2 The Importance of Word Vector Spaces The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table 2 shows the performance of NBT-CNN5 models making use of three different word vector collections: 1) ‘random’ word vectors initialised using the XAVIER initialisation (Glorot and Bengio, 2010); 2) distributional GloVe vectors (Pennington et al., 2014), trained using co-occurrence information in large textual corpora; and 3) semantically specialised ParagramSL999 vectors (Wieting et al., 2015), which are obtained by injecting semantic similarity constraints from the Paraphrase Database (Ganitkevitch et al., 2013) into the distributional GloVe vectors in order to improve their semantic content. The results in Table 2 show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and XAVIER vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. 
We believe this happens because distributional models keep related, yet antonymous words close together (e.g. north and south, expensive and inexpensive), offsetting the useful semantic content embedded in this vector spaces. 5The NBT-DNN model showed the same trends. For brevity, Table 2 presents only the NBT-CNN figures. Word Vectors DSTC2 WOZ 2.0 Goals Requests Goals Requests XAVIER (No Info.) 64.2 81.2 81.2 90.7 GloVe 69.0* 96.4* 80.1 91.4 Paragram-SL999 73.4* 96.5* 84.2* 91.6 Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline XAVIER (random) word vectors (paired t-test; p < 0.05). 7 Conclusion In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that semantic specialisation boosts performance in downstream tasks. In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation. Acknowledgements The authors would like to thank Ivan Vuli´c, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions. 1785 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Dan Bohus and Alex Rudnicky. 2006. A “k hypotheses + other” belief updating model. In Proceedings of the AAAI Workshop on Statistical and Empirical Methods in Spoken Dialogue Systems. Asli Celikyilmaz and Dilek Hakkani-Tur. 2015. Convolutional Neural Network Based Semantic Tagging with Entity Embeddings. In Proceedings of NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493– 2537. Franck Dernoncourt, Ji Young Lee, Trung H. Bui, and Hung H. Bui. 2016. Robust dialog state tracking for large ontologies. In Proceedings of IWSDS. Ondˇrej Duˇsek and Filip Jurˇc´ıˇcek. 2015. 
Training a Natural Language Generator From Unaligned Data. In Proceedings of ACL. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of NAACL HLT. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS. Matthew Henderson, Milica Gaˇsi´c, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative Spoken Language Understanding Using Word Confusion Networks. In Spoken Language Technology Workshop, 2012. IEEE. Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014a. The Second Dialog State Tracking Challenge. In Proceedings of SIGDIAL. Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014b. The Third Dialog State Tracking Challenge. In Proceedings of IEEE SLT. Matthew Henderson, Blaise Thomson, and Steve Young. 2014c. Robust Dialog State Tracking using Delexicalised Recurrent Neural Networks and Unsupervised Adaptation. In Proceedings of IEEE SLT. Matthew Henderson, Blaise Thomson, and Steve Young. 2014d. Word-Based Dialog State Tracking with Recurrent Neural Networks. In Proceedings of SIGDIAL. Youngsoo Jang, Jiyeon Ham, Byung-Jun Lee, Youngjae Chang, and Kee-Eung Kim. 2016. Neural dialog state tracker for large ontologies by attention mechanism. In Proceedings of IEEE SLT. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of ACL. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR. Byung-Jun Lee and Kee-Eung Kim. 2016. Dialog History Construction with Long-Short Term Memory for Robust Generative Dialog State Tracking. Dialogue & Discourse 7(3):47–64. Bing Liu and Ian Lane. 2016a. Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling. In Proceedings of Interspeech. Bing Liu and Ian Lane. 2016b. Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks. In Proceedings of SIGDIAL. Fei Liu and Julien Perez. 2017. Gated End-to-End Memory Networks. In Proceedings of EACL. F. Mairesse, M. Gasic, F. Jurcicek, S. Keizer, B. Thomson, K. Yu, and S. Young. 2009. Spoken Language Understanding from Unaligned Data using Discriminative Classification Models. In Proceedings of ICASSP. Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(3):530–539. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting Word Vectors to Linguistic Constraints. In Proceedings of HLT-NAACL. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain Dialog State Tracking using Recurrent Neural Networks. In Proceedings of ACL. 1786 Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML. Baolin Peng, Kaisheng Yao, Li Jing, and Kam-Fai Wong. 2015. Recurrent Neural Networks with External Memory for Language Understanding. 
In Proceedings of the National CCF Conference on Natural Language Processing and Chinese Computing. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of EMNLP. Julien Perez. 2016. Spectral decomposition method of dialog state tracking via collective matrix factorization. Dialogue & Discourse 7(3):34–46. Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using Memory Network. In Proceedings of EACL. Christian Raymond and Giuseppe Ricardi. 2007. Generative and discriminative algorithms for spoken language understanding. In Proceedings of Interspeech. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In ICLR. Iman Saleh, Shafiq Joty, Llu´ıs M`arquez, Alessandro Moschitti, Preslav Nakov, Scott Cyphers, and Jim Glass. 2014. A study of using syntactic and semantic structures for concept segmentation and labeling. In Proceedings of COLING. Hongjie Shi, Takashi Ushio, Mitsuru Endo, Katsuyoshi Yamagami, and Noriaki Horii. 2016. Convolutional Neural Networks for Multi-topic Dialog State Tracking. In Proceedings of IWSDS. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research . Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016a. Continuously learning neural dialogue management. In arXiv preprint: 1606.02689. Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016b. On-line active reward learning for policy optimisation in spoken dialogue systems. In Proceedings of ACL. Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The SJTU System for Dialog State Tracking Challenge 2. In Proceedings of SIGDIAL. Kai Sun, Qizhe Xie, and Kai Yu. 2016. Recurrent Polynomial Network for Dialogue State Tracking. Dialogue & Discourse 7(3):65–88. Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech and Language . Gokhan Tur, Anoop Deoras, and Dilek Hakkani-Tur. 2013. Semantic Parsing Using Word Confusion Networks With Conditional Random Fields. In Proceedings of Interspeech. Andreas Vlachos and Stephen Clark. 2014. A new corpus and imitation learning framework for contextdependent semantic parsing. TACL 2:547–559. Miroslav Vodol´an, Rudolf Kadlec, and Jan Kleindienst. 2017. Hybrid Dialog State Tracker with ASR Features. In Proceedings of EACL. Ngoc Thang Vu, Pankaj Gupta, Heike Adel, and Hinrich Sch¨utze. 2016. Bi-directional recurrent neural network with ranking loss for spoken language understanding. In Proceedings of ICASSP. Ivan Vuli´c, Nikola Mrkˇsi´c, Roi Reichart, Diarmuid ´O S´eaghdha, Steve Young, and Anna Korhonen. 2017. Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules. In Proceedings of ACL. Wayne Wang. 1994. Extracting Information From Spontaneous Speech. In Proceedings of Interspeech. Zhuoran Wang and Oliver Lemon. 2013. A Simple and Generic Belief Tracking Mechanism for the Dialog State Tracking Challenge: On the believability of observed information. In Proceedings of SIGDIAL. Tsung-Hsien Wen, Milica Gaˇsi´c, Dongho Kim, Nikola Mrkˇsi´c, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. 
Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking. In Proceedings of SIGDIAL. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Pei-Hao Su, David Vandyke, and Steve Young. 2015b. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. In Proceedings of EMNLP. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gaˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of EACL. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL 3:345– 358. 1787 Jason D. Williams. 2014. Web-style ranking and SLU combination for dialog state tracking. In Proceedings of SIGDIAL. Jason D. Williams, Antoine Raux, and Matthew Henderson. 2016. The Dialog State Tracking Challenge series: A review. Dialogue & Discourse 7(3):4–33. Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan W. Black. 2013. The Dialogue State Tracking Challenge. In Proceedings of SIGDIAL. Jason D. Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech and Language 21:393–422. Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Proceedings of ASRU. Steve Young, Milica Gaˇsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech and Language 24:150–174. Xiaodong Zhang and Houfeng Wang. 2016. A Joint Model of Intent Determination and Slot Filling for Spoken Language Understanding. In Proceedings of IJCAI. Lukas Zilka and Filip Jurcicek. 2015. Incremental LSTM-based dialog state tracker. In Proceedings of ASRU. 1788
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1789–1798 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1164 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1789–1798 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1164 Exploiting Argument Information to Improve Event Detection via Supervised Attention Mechanisms Shulin Liu1,2, Yubo Chen1,2, Kang Liu1 and Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 University of Chinese Academy of Sciences, Beijing, 100049, China {shulin.liu, yubo.chen, kliu, jzhao}@nlpr.ia.ac.cn Abstract This paper tackles the task of event detection (ED), which involves identifying and categorizing events. We argue that arguments provide significant clues to this task, but they are either completely ignored or exploited in an indirect manner in existing detection approaches. In this work, we propose to exploit argument information explicitly for ED via supervised attention mechanisms. In specific, we systematically investigate the proposed model under the supervision of different attention strategies. Experimental results show that our approach advances state-ofthe-arts and achieves the best F1 score on ACE 2005 dataset. 1 Introduction In the ACE (Automatic Context Extraction) event extraction program, an event is represented as a structure comprising an event trigger and a set of arguments. This work tackles event detection (ED) task, which is a crucial part of event extraction (EE) and focuses on identifying event triggers and categorizing them. For instance, in the sentence “He died in the hospital”, an ED system is expected to detect a Die event along with the trigger word “died”. Besides, the task of EE also includes event argument extraction (AE), which involves event argument identification and role classification. In the above sentence, the arguments of the event include “He”(Role = Person) and “hospital”(Role = Place). However, this paper does not focus on AE and only tackles the former task. According to the above definitions, event arguments seem to be not essentially necessary to ED. However, we argue that they are capable of providing significant clues for identifying and categorizing events. They are especially useful for ambiguous trigger words. For example, consider a sentence in ACE 2005 dataset: Mohamad fired Anwar, his former protege, in 1998. In this sentence, “fired” is the trigger word and the other bold words are event arguments. The correct type of the event triggered by “fired” in this case is End-Position. However, it might be easily misidentified as Attack because “fired” is a multivocal word. In this case, if we consider the phrase “former protege”, which serves as an argument (Role = Position) of the target event, we would have more confidence in predicting it as an End-Position event. Unfortunately, most existing methods performed event detection individually, where the annotated arguments in training set are totally ignored (Ji and Grishman, 2008; Gupta and Ji, 2009; Hong et al., 2011; Chen et al., 2015; Nguyen and Grishman, 2015; Liu et al., 2016a,b; Nguyen and Grishman, 2016). 
Although some joint learning based methods have been proposed, which tackled event detection and argument extraction simultaneously (Riedel et al., 2009; Li et al., 2013; Venugopal et al., 2014; Nguyen et al., 2016), these approaches usually only make remarkable improvements to AE, but insignificant to ED. Table 1 illustrates our observations. Li et al. (2013) and Nguyen et al. (2016) are state-of-the-art joint models in symbolic and embedding methods for event extraction, respectively. Compared with state-of-the-art pipeline systems, both join1789 Methods ED AE Symbolic Hong’s pipeline (2011) 68.3 48.3 Methods Li’s joint (2013) 67.5 52.7 Embedding Chen’s pipeline (2015) 69.1 53.5 Methods Nguyen’s joint (2016) 69.3 55.4 Table 1: Performances of pipeline and joint approaches on ACE 2005 dataset. The pipeline method in each group was the state-of-the-art system when the corresponding joint method was proposed. t methods achieved remarkable improvements on AE (over 1.9 points), whereas achieved insignificant improvements on ED (less than 0.2 points). The symbolic joint method even performed worse (67.5 vs. 68.3) than pipeline system on ED. We believe that this phenomenon may be caused by the following two reasons. On the one hand, since joint methods simultaneously solve ED and AE, methods following this paradigm usually combine the loss functions of these two tasks and are jointly trained under the supervision of annotated triggers and arguments. However, training corpus contains much more annotated arguments than triggers (about 9800 arguments and 5300 triggers in ACE 2005 dataset) because each trigger may be along with multiple event arguments. Thus, the unbalanced data may cause joint models to favor AE task. On the other hand, in implementation, joint models usually pre-predict several potential triggers and arguments first and then make global inference to select correct items. When pre-predicting potential triggers, almost all existing approaches do not leverage any argument information. In this way, ED does hardly benefit from the annotated arguments. By contrast, the component for pre-prediction of arguments always exploits the extracted trigger information. Thus, we argue that annotated arguments are actually used for AE, not for ED in existing joint methods, which is also the reason we call it an indirect way to use arguments for ED. Contrast to joint methods, this paper proposes to exploit argument information explicitly for ED. We have analyzed that arguments are capable of providing significant clues to ED, which gives us an enlightenment that arguments should be focused on when performing this task. Therefore, we propose a neural network based approach to detect events in texts. And in the proposed approach, we adopt a supervised attention mechanism to achieve this goal, where argument words are expected to acquire more attention than other words. The attention value of each word in a given sentence is calculated by an operation between the current word and the target trigger candidate. Specifically, in training procedure, we first construct gold attentions for each trigger candidate based on annotated arguments. Then, treating gold attentions as the supervision to train the attention mechanism, we learn attention and event detector jointly both in supervised manner. In testing procedure, we use the ED model with learned attention mechanisms to detect events. In the experiment section, we systematically conduct comparisons on a widely used benchmark dataset ACE20051. 
In order to further demonstrate the effectiveness of our approach, we also use events from FrameNet (FN) (F. Baker et al., 1998) as extra training data, as the same as Liu et al. (2016a) to alleviate the data-sparseness problem for ED to augment the performance of the proposed approach. The experimental results demonstrate that the proposed approach is effective for ED task, and it outperforms state-of-the-art approaches with remarkable gains. To sum up, our main contributions are: (1) we analyze the problem of joint models on the task of ED, and propose to use the annotated argument information explicitly for this task. (2) to achieve this goal, we introduce a supervised attention based ED model. Furthermore, we systematically investigate different attention strategies for the proposed model. (3) we improve the performance of ED and achieve the best performance on the widely used benchmark dataset ACE 2005. 2 Task Description The ED task is a subtask of ACE event evaluations where an event is defined as a specific occurrence involving one or more participants. Event extraction task requires certain specified types of events, which are mentioned 1https://catalog.ldc.upenn.edu/LDC2006T06 1790 in the source language data, be detected. We firstly introduce some ACE terminologies to facilitate the understanding of this task: Entity: an object or a set of objects in one of the semantic categories of interests. Entity mention: a reference to an entity (typically, a noun phrase). Event trigger: the main word that most clearly expresses an event occurrence. Event arguments: the mentions that are involved in an event (participants). Event mention: a phrase or sentence within which an event is described, including the trigger and arguments. The goal of ED is to identify event triggers and categorize their event types. For instance, in the sentence “He died in the hospital”, an ED system is expected to detect a Die event along with the trigger word “died”. The detection of event arguments “He”(Role = Person) and “hospital”(Role = Place) is not involved in the ED task. The 2005 ACE evaluation included 8 super types of events, with 33 subtypes. Following previous work, we treat these simply as 33 separate event types and ignore the hierarchical structure among them. 3 The Proposed Approach Similar to existing work, we model ED as a multi-class classification task. In detail, given a sentence, we treat every token in that sentence as a trigger candidate, and our goal is to classify each of these candidates into one of 34 classes (33 event types plus an NA class). In our approach, every word along with its context, which includes the contextual words and entities, constitute an event trigger candidate. Figure 1 describes the architecture of the proposed approach, which involves two components: (i) Context Representation Learning (CRL), which reveals the representation of both contextual words and entities via attention mechanisms; (ii) Event Detector (ED), which assigns an event type (including the NA type) to each candidate based on the learned contextual representations. 3.1 Context Representation Learning In order to prepare for Context Representation Learning (CRL), we limit the context to a fixed length by trimming longer senFigure 1: The architecture of the proposed approach for event detection. In this figure, w is the candidate word, [w1, ..., wn] is the contextual words of w, and [e1, ..., en] is the corresponding entity types of [w1, ... , wn]. 
tences and padding shorter sentences with a special token when necessary. Let n be the fixed length and w0 be the current candidate trigger word; then its contextual words Cw are [w_{-n/2}, w_{-n/2+1}, ..., w_{-1}, w_{1}, ..., w_{n/2-1}, w_{n/2}] (the current candidate trigger word w0 is not included in the context), and its contextual entities, which are the corresponding entity types (including an NA type) of Cw, are [e_{-n/2}, e_{-n/2+1}, ..., e_{-1}, e_{1}, ..., e_{n/2-1}, e_{n/2}]. For convenience, we use w to denote the current word, [w1, w2, ..., wn] to denote the contextual words Cw, and [e1, e2, ..., en] to denote the contextual entities Ce in figure 1. Note that w, Cw and Ce are originally in symbolic representation. Before entering the CRL component, we transform them into real-valued vectors by looking up the word embedding table and the entity type embedding table. Then we calculate attention vectors for both contextual words and entities by performing operations between the current word w and its contexts. Finally, the contextual word representation cw and the contextual entity representation ce are formed as the weighted sums of the corresponding embeddings of each word and entity in Cw and Ce, respectively. We give the details in the following subsections.
3.1.1 Word Embedding Table
Word embeddings learned from a large amount of unlabeled data have been shown to capture meaningful semantic regularities of words (Bengio et al., 2003; Erhan et al., 2010). This paper uses learned word embeddings as the source of basic features. Specifically, we use the Skip-gram model (Mikolov et al., 2013) to learn word embeddings on the NYT corpus (https://catalog.ldc.upenn.edu/LDC2008T19).
3.1.2 Entity Type Embedding Table
The ACE 2005 corpus annotates not only events but also entities for each given sentence. Following existing work (Li et al., 2013; Chen et al., 2015; Nguyen and Grishman, 2015), we exploit the annotated entity information in our ED system. We randomly initialize an embedding vector for each entity type (including the NA type) and update it during training.
3.1.3 Representation Learning
In this subsection, we describe our approach to learn representations of both contextual words and entities, which serve as inputs to the following event detector component. Recall that we use the matrices Cw and Ce to denote contextual words and contextual entities, respectively. As illustrated in figure 1, the CRL component takes three inputs: the current candidate trigger word w, the contextual words Cw and the contextual entities Ce. Two attention vectors, which reflect different aspects of the context, are then calculated. The contextual word attention vector αw is computed from the current word w and its contextual words Cw. We first transform each word wk (including w and every word in Cw) into a hidden representation \bar{w}_k by the following equation:

\bar{w}_k = f(w_k W_w)    (1)

where f(·) is a non-linear function such as the hyperbolic tangent, and W_w is the transformation matrix. Then, we use the hidden representations to compute the attention value for each word in Cw:

α_w^k = exp(\bar{w} \bar{w}_k^T) / Σ_i exp(\bar{w} \bar{w}_i^T)    (2)

The contextual entity attention vector αe is calculated in a similar manner to αw:

α_e^k = exp(\bar{w}_e \bar{e}_k^T) / Σ_i exp(\bar{w}_e \bar{e}_i^T)    (3)

Note that we do not use the entity information of the current candidate token to compute the attention vector, because only a small percentage of true event triggers are entities (only about 10% of the triggers in ACE 2005 are entities).
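To make the attention computation concrete, the following minimal NumPy sketch (hypothetical variable names and dimensions, not the authors' code) illustrates equations (1) and (2): both the candidate word and its context are projected by W_w, and attention weights are obtained by a softmax over dot products with the transformed candidate. The entity attention of equation (3) is obtained analogously, with the query first projected into the entity type space.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def context_attention(w, C, W):
    """Attention over a context matrix C (n x d) given a query embedding w (d,),
    following Eq. (1)-(2): hidden = tanh(x W), alpha_k proportional to exp(hidden_w . hidden_k)."""
    h_w = np.tanh(w @ W)          # hidden representation of the query, Eq. (1)
    H = np.tanh(C @ W)            # hidden representations of the context words
    return softmax(H @ h_w)       # attention weights, Eq. (2)

# toy example: d = 4-dimensional embeddings, a context of n = 3 words
rng = np.random.default_rng(0)
d, n = 4, 3
w  = rng.normal(size=d)           # embedding of the candidate trigger word
Cw = rng.normal(size=(n, d))      # embeddings of the contextual words
Ww = rng.normal(size=(d, d))      # transformation matrix of Eq. (1)

alpha_w = context_attention(w, Cw, Ww)   # contextual word attention
print(alpha_w, alpha_w.sum())            # the weights sum to 1
```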
Therefore, the entity type of a candidate trigger is not informative for ED. Instead, we use w_e, which is obtained by transforming w from the word space into the entity type space, as the attention source. We combine αw and αe to obtain the final attention vector, α = αw + αe. Finally, the contextual word representation cw and the contextual entity representation ce are formed as weighted sums of Cw and Ce, respectively:

cw = Cw α^T    (4)
ce = Ce α^T    (5)

3.2 Event Detector
As illustrated in figure 1, we employ a three-layer (an input layer, a hidden layer and a softmax output layer) Artificial Neural Network (ANN) (Hagan et al., 1996) to model the ED task, which has been demonstrated to be very effective for event detection by Liu et al. (2016a).
3.2.1 Basic ED Model
Given a sentence, as illustrated in figure 1, we concatenate the embedding vectors of the context (including contextual words and entities) and of the current candidate trigger to serve as the input to the ED model. Then, for a given input sample x, the ANN with parameters θ outputs a vector O, where the i-th value oi of O is the confidence score for classifying x into the i-th event type. To obtain the conditional probability p(i|x, θ), we apply a softmax operation over all event types:

p(i|x, θ) = exp(o_i) / Σ_{k=1}^{m} exp(o_k)    (6)

Given all of our (suppose T) training instances (x^(i); y^(i)), we can then define the negative log-likelihood loss function:

J(θ) = − Σ_{i=1}^{T} log p(y^(i)|x^(i), θ)    (7)

We train the model using stochastic gradient descent (SGD) over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012). Regularization is implemented by dropout (Kim, 2014; Hinton et al., 2012) and the L2 norm.
3.2.2 Supervised Attention
In this subsection, we introduce supervised attention to explicitly use annotated argument information to improve ED. Our basic idea is simple: argument words should acquire more attention than other words. To achieve this goal, we first construct vectors from the annotated arguments to serve as gold attentions. Then, we employ them as supervision to train the attention mechanism.
Constructing Gold Attention Vectors
Our goal is to encourage argument words to obtain more attention than other words. To this end, we propose two strategies to construct gold attention vectors:
S1: only pay attention to argument words. That is, all argument words in the given context obtain the same attention, whereas other words get no attention. For candidates without any annotated arguments in their context (such as negative samples), we force all entities to share the whole attention equally. Figure 2 illustrates the details, where α∗ is the final gold attention vector.

Figure 2: An example of S1 to construct the gold attention vector. The word fired is the trigger candidate, and underlined words are arguments of fired annotated in the corpus.

S2: pay attention to both arguments and the words around them. The assumption is that not only the arguments are important to ED, but the words around them are also helpful, and the nearer a word is to an argument, the more attention it should obtain. Inspired by Mi et al. (2016), we use a Gaussian distribution g(·) to model the attention distribution of the words around arguments. In detail, given an instance, we first obtain the raw attention vector α in the same manner as S1 (see figure 2).
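As a concrete illustration of strategy S1, the short sketch below (a sketch with hypothetical helper names, not the authors' implementation) builds a gold attention vector from binary argument annotations: argument positions share the attention mass uniformly, and when no argument is annotated the entity positions are used instead.

```python
import numpy as np

def gold_attention_s1(is_argument, is_entity):
    """Strategy S1: uniform attention over argument words; if the candidate has
    no annotated argument in its context, fall back to a uniform distribution
    over the entity positions. Inputs are 0/1 arrays over the context positions."""
    is_argument = np.asarray(is_argument, dtype=float)
    is_entity = np.asarray(is_entity, dtype=float)
    mask = is_argument if is_argument.sum() > 0 else is_entity
    if mask.sum() == 0:                      # no arguments and no entities (assumed fallback)
        mask = np.ones_like(mask)            # uniform attention over all positions
    return mask / mask.sum()

# context of 8 words; positions 2 and 5 are annotated arguments
print(gold_attention_s1([0, 0, 1, 0, 0, 1, 0, 0], [0, 1, 1, 0, 0, 1, 0, 1]))
# -> [0, 0, 0.5, 0, 0, 0.5, 0, 0]
```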
Then, we create a new vector α′ with all entries initialized to zero, and for each position i with αi = 1 we update α′ by the following algorithm:

Algorithm 1: Updating α′
for k ∈ {−w, ..., 0, ..., w} do
    α′_{i+k} = α′_{i+k} + g(|k|, µ, σ)
end

where w is the window size of the attention mechanism and µ, σ are the hyper-parameters of the Gaussian distribution. Finally, we normalize α′ to obtain the target attention vector α∗. As in S1, we treat all entities in the context as arguments if the current candidate does not have any annotated arguments (such as negative samples).
Jointly Training ED and Attention
Given the gold attention α∗ (see subsection 3.2.2) and the machine attention α produced by our model (see subsection 3.1.3), we employ the squared error as the loss function over attentions:

D(θ) = Σ_{i=1}^{T} Σ_{j=1}^{n} (α∗_{ij} − α_{ij})^2    (8)

Combining equation 7 and equation 8, we define the joint loss function of our proposed model as follows:

J′(θ) = J(θ) + λ D(θ)    (9)

where λ is a hyper-parameter that trades off J and D. As for the basic ED model, we minimize the loss function J′(θ) using SGD over shuffled mini-batches with the Adadelta update rule.
4 Experiments
4.1 Dataset and Experimental Setup
Dataset We conducted experiments on the ACE 2005 dataset. For the purpose of comparison, we followed the evaluation setup of (Li et al., 2013; Chen et al., 2015; Liu et al., 2016b): we randomly selected 30 articles from different genres as the development set and subsequently conducted a blind test on a separate set of 40 ACE 2005 newswire documents. We used the remaining 529 articles as our training set.
Hyper-parameter Setting Hyper-parameters are tuned on the development dataset. We set the dimension of word embeddings to 200, the dimension of entity type embeddings to 50, the size of the hidden layer to 300, the output size of the word transformation matrix Ww in equation 1 to 200, the batch size to 100, the hyper-parameter of the L2 norm to 10−6 and the dropout rate to 0.6. In addition, we use the standard normal distribution to model the attention distribution of words around arguments, which means that µ = 0.0, σ = 1.0, and the window size is set to 3 (see Subsection 3.2.2). The hyper-parameter λ in equation 9 varies across attention strategies; we give its setting in the next section.
4.2 Correctness of Our Assumption
In this section, we conduct experiments on the ACE 2005 corpus to demonstrate the correctness of our assumption that argument information is crucial to ED. To achieve this goal, we design a series of systems for comparison.
ANN is the basic event detection model, in which the hyper-parameter λ is set to 0. This system does not employ argument information and computes attentions without supervision (see Subsection 3.1.3).
ANN-ENT also sets λ to 0. The difference is that it constructs the attention vector α by forcing all entities in the context to share the attention equally instead of computing it in the manner introduced in Subsection 3.1.3. Since all arguments are entities, this system is designed to investigate the effects of entities.
ANN-Gold1 uses the gold attentions constructed by strategy S1 in both the training and testing procedures.
ANN-Gold2 is akin to ANN-Gold1, but uses the second strategy to construct its gold attentions.
Note that, in order to avoid interference from the attention mechanism, the last two systems are designed to use argument information (via gold attentions) in both the training and testing procedures. Thus, both ANN-Gold1 and ANN-Gold2 also set λ to 0.
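Returning to Algorithm 1 and the joint objective of equations (8)-(9), the following sketch (hypothetical helper names; a sketch under the stated hyper-parameters µ = 0, σ = 1 and window size 3, not the authors' code) spreads a Gaussian window around each argument position, normalizes the result to obtain α∗, and adds the attention loss to the classification loss with weight λ.

```python
import numpy as np

def gaussian(k, mu=0.0, sigma=1.0):
    # density of N(mu, sigma^2) evaluated at k, used as g(|k|, mu, sigma)
    return np.exp(-0.5 * ((k - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def gold_attention_s2(is_argument, window=3, mu=0.0, sigma=1.0):
    """Strategy S2 (Algorithm 1): spread a Gaussian of width `window` around
    every argument position, then normalize to obtain alpha*. Assumes at least
    one argument position (the paper falls back to entities otherwise)."""
    is_argument = np.asarray(is_argument, dtype=float)
    alpha_prime = np.zeros_like(is_argument)
    for i in np.flatnonzero(is_argument):
        for k in range(-window, window + 1):
            j = i + k
            if 0 <= j < len(alpha_prime):
                alpha_prime[j] += gaussian(abs(k), mu, sigma)
    return alpha_prime / alpha_prime.sum()

def joint_loss(log_probs_gold, alpha_star, alpha, lam=5.0):
    """Eq. (9): negative log-likelihood (Eq. 7) plus lambda times the
    squared attention error (Eq. 8), both summed over the batch."""
    J = -np.sum(log_probs_gold)
    D = np.sum((alpha_star - alpha) ** 2)
    return J + lam * D

print(gold_attention_s2([0, 0, 1, 0, 0, 0, 0, 0]))
```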
Methods       P     R     F1
ANN           69.9  60.8  65.0
ANN-ENT       79.4  60.7  68.8
ANN-Gold1†    81.9  65.1  72.5
ANN-Gold2†    81.4  66.9  73.4

Table 2: Experimental results on the ACE 2005 corpus. † designates the systems that employ argument information.

Table 2 compares these systems on the ACE 2005 corpus. From the table, we observe that the systems with argument information (the last two systems) significantly outperform the systems without argument information (the first two systems), which demonstrates that argument information is very useful for this task. Moreover, since all arguments are entities, to be precise we also investigate whether ANN-Gold1/2 actually benefits from entities or from arguments. Compared with ANN-ENT (recall that this system only uses entity information), ANN-Gold1/2 performs much better, which shows that entity information alone is not enough and further demonstrates that argument information is necessary for ED.
4.3 Results on ACE 2005 Corpus
In this section, we conduct experiments on the ACE 2005 corpus to demonstrate the effectiveness of the proposed approach. First, we introduce the systems implemented in this work.
ANN-S1 uses the gold attentions constructed by strategy S1 as supervision to learn the attention. In our experiments, λ is set to 1.0.
ANN-S2 is akin to ANN-S1, but uses strategy S2 to construct the gold attentions, and the hyper-parameter λ is set to 5.0.
These two systems both employ supervised attention mechanisms. For comparison, we use the unsupervised-attention system ANN, introduced in Subsection 4.2, as our baseline. In addition, we select the following state-of-the-art methods for comparison.
1). Li's joint model (Li et al., 2013) extracts events based on structure prediction. It is the best structure-based system.

Methods                    P     R     F1
Li's joint model (2013)    73.7  62.3  67.5
Liu's PSL (2016)           75.3  64.4  69.4
Liu's FN-Based (2016)      77.6  65.2  70.7
Nguyen's joint (2016)      66.0  73.0  69.3
Skip-CNN (2016)            N/A   N/A   71.3
ANN                        69.9  60.8  65.0
ANN-S1†                    81.4  62.4  70.8
ANN-S2†                    78.0  66.3  71.7

Table 3: Experimental results on ACE 2005. The first group illustrates the performances of state-of-the-art approaches. The second group illustrates the performances of the proposed approach. † designates the systems that employ argument information.

2). Liu's PSL (Liu et al., 2016b) employs both latent local and global information for event detection. It is the best-reported feature-based system.
3). Liu's FN-Based approach (Liu et al., 2016a) leverages the annotated corpus of FrameNet to alleviate the data sparseness problem of ED, based on the observation that frames in FN are analogous to events in ACE.
4). Nguyen's joint model (Nguyen et al., 2016) employs a bi-directional RNN to jointly extract event triggers and arguments. It is the best-reported representation-based joint approach proposed for this task.
5). Skip-CNN (Nguyen and Grishman, 2016) introduces non-consecutive convolution to capture non-consecutive k-grams for event detection. It is the best-reported representation-based approach on this task.
Table 3 presents the experimental results on the ACE 2005 corpus. From the table, we make the following observations:
1). ANN performs unexpectedly poorly, which indicates that unsupervised attention mechanisms do not work well for ED. We believe the reason is that the training data of the ACE 2005 corpus is insufficient to train a precise attention mechanism in an unsupervised manner, considering that data sparseness is an important issue for ED (Zhu et al., 2014; Liu et al., 2016a).
2).
With argument information employed via supervised attention mechanisms, both ANN-S1 and ANN-S2 outperform ANN with remarkable gains, which illustrates the effectiveness of the proposed approach. 3). ANN-S2 outperforms ANN-S1, but the latter achieves higher precision. It is not difficult to understand. On the one hand, strategy S1 only focuses on argument words, which provides accurate information to identify event type, thus ANN-S1 could achieve higher precision. On the other hand, S2 focuses on both arguments and words around them, which provides more general but noised clues. Thus, ANN-S2 achieves higher recall with a little loss of precision. 4). Compared with state-of-the-art approaches, our method ANN-S2 achieves the best performance. We also perform a t-test (p ⩽0.05), which indicates that our method significantly outperforms all of the compared methods. Furthermore, another noticeable advantage of our approach is that it achieves much higher precision than state-of-the-arts. 4.4 Augmentation with FrameNet Recently, Liu et al. (2016a) used events automatically detected from FN as extra training data to alleviate the data-sparseness problem for event detection. To further demonstrate the effectiveness of the proposed approach, we also use the events from FN to augment the performance of our approach. In this work, we use the events published by Liu et al. (2016a)5 as extra training data. However, their data can not be used in the proposed approach without further processing, because it lacks of both argument and entity information. Figure 3 shows several examples of this data. Figure 3: Examples of events detected from FrameNet (published by Liu et al. (2016a)). Processing of Events from FN Liu et al. (2016a) detected events from FrameNet based on the observation that frames in FN are analogous to events in ACE 5https://github.com/subacl/acl16 1795 (lexical unit of a frame ↔trigger of an event, frame elements of a frame ↔arguments of an event). All events they published are also frames in FN. Thus, we treat frame elements annotated in FN corpus as event arguments. Since frames generally contain more frame elements than events, we only use core6 elements in this work. Moreover, to obtain entity information, we use RPI Joint Information Extraction System7 (Li et al., 2013, 2014; Li and Ji, 2014) to label ACE entity mentions. Experimental Results We use the events from FN as extra training data and keep the development and test datasets unchanged.Table 4 presents the experimental results. Methods P R F1 ANN 69.9 60.8 65.0 ANN-S1 81.4 62.4 70.8 ANN-S2 78.0 66.3 71.7 ANN +FrameNet 72.5 61.7 66.7 ANN-S1 +FrameNet 80.1 63.6 70.9 ANN-S2 +FrameNet 76.8 67.5 71.9 Table 4: Experimental results on ACE 2005 corpus. “+FrameNet” designates the systems that are augmented by events from FrameNet. From the results, we observe that: 1). With extra training data, ANN achieves significant improvements on F1 measure (66.7 vs. 65.0). This result, to some extent, demonstrates the correctness of our assumption that the data sparseness problem is the reason that causes unsupervised attention mechanisms to be ineffective to ED. 2). Augmented with external data, both ANN-S1 and ANN-S2 achieve higher recall with a little loss of precision. This is to be expected. On the one hand, more positive training samples consequently make higher recall. 
On the other hand, the extra event samples are automatically extracted from FN, thus false-positive samples are inevitable to be involved, which may result in hurting the precision. Anyhow, with events from FN, our approach achieves higher F1 score. 6FrameNet classifies frame elements into three groups: core, peripheral and extra-thematic. 7http://nlp.cs.rpi.edu/software/ 5 Related Work Event detection is an increasingly hot and challenging research topic in NLP. Generally, existing approaches could roughly be divided into two groups. The first kind of approach tackled this task under the supervision of annotated triggers and entities, but totally ignored annotated arguments. The majority of existing work followed this paradigm, which includes feature-based methods and representationbased methods. Feature-based methods exploited a diverse set of strategies to convert classification clues (i.e., POS tags, dependency relations) into feature vectors (Ahn, 2006; Ji and Grishman, 2008; Patwardhan and Riloff, 2009; Gupta and Ji, 2009; Liao and Grishman, 2010; Hong et al., 2011; Liu et al., 2016b). Representation-based methods typically represent candidate event mentions by embeddings and feed them into neural networks (Chen et al., 2015; Nguyen and Grishman, 2015; Liu et al., 2016a; Nguyen and Grishman, 2016). The second kind of approach, on the contrast, tackled event detection and argument extraction simultaneously, which is called joint approach (Riedel et al., 2009; Poon and Vanderwende, 2010; Li et al., 2013, 2014; Venugopal et al., 2014; Nguyen et al., 2016). Joint approach is proposed to capture internal and external dependencies of events, including trigger-trigger, argument-argument and trigger-argument dependencies. Theoretically, both ED and AE are expected to benefit from joint methods because triggers and arguments are jointly considered. However, in practice, existing joint methods usually only make remarkable improvements to AE, but insignificant to ED. Different from them, this work investigates the exploitation of argument information to improve the performance of ED. 6 Conclusions In this work, we propose a novel approach to model argument information explicitly for ED via supervised attention mechanisms. Besides, we also investigate two strategies to construct gold attentions using the annotated arguments. To demonstrate the effectiveness of the proposed method, we systematically conduc1796 t a series of experiments on the widely used benchmark dataset ACE 2005. Moreover, we also use events from FN to augment the performance of the proposed approach. Experimental results show that our approach outperforms state-of-the-art methods, which demonstrates that the proposed approach is effective for event detection. Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61533018) and the National Basic Research Program of China (No. 2014CB340503). And this research work was also supported by Google through focused research awards program. References David Ahn. 2006. Proceedings of the workshop on annotating and reasoning about time and events. Association for Computational Linguistics, pages 1–8. http://aclweb.org/anthology/W06-0901. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research 3:1137–1155. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 167–176. https://doi.org/10.3115/v1/P15-1017. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research 11:625–660. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics. http://aclweb.org/anthology/C98-1013. Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on crossevent propagation. In Proceedings of the ACLIJCNLP 2009 Conference Short Papers. Association for Computational Linguistics, pages 369– 372. http://aclweb.org/anthology/P09-2093. Martin T Hagan, Howard B Demuth, Mark H Beale, et al. 1996. Neural network design. Pws Pub. Boston. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 https://arxiv.org/abs/1207.0580. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1127–1136. http://aclweb.org/anthology/P11-1113. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 254–262. http://aclweb.org/anthology/P08-1030. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1746–1751. http://www.anthology.aclweb.org/D14-1181. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 402–412. https://doi.org/10.3115/v1/P14-1038. Qi Li, Heng Ji, Yu HONG, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1846–1851. https://doi.org/10.3115/v1/D14-1198. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 73–82. http://aclweb.org/anthology/P13-1008. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 789–797. http://aclweb.org/anthology/P10-1081. 1797 Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016a. Leveraging framenet to improve automatic event detection. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2134–2143. https://doi.org/10.18653/v1/P16-1201. Shulin Liu, Kang Liu, Shizhu He, and Jun Zhao. 2016b. A probabilistic soft logic based approach to exploiting latent and global information in event classification. In Proceedings of the thirtieth AAAI Conference on Artificail Intelligence. pages 2993–2999. http://www.aaai.org/ocs/index.php/AAAI /AAAI16/paper/view/11990/12052. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. arXiv preprint arXiv:1608.00112 https://arxiv.org/abs/1608.00112. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 https://arxiv.org/abs/1301.3781. Huu Thien Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 300–309. https://doi.org/10.18653/v1/N16-1034. Huu Thien Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, pages 365– 371. https://doi.org/10.3115/v1/P15-2060. Huu Thien Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 886–891. http://aclweb.org/anthology/D16-1085. Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 151–160. http://aclweb.org/anthology/D09-1016. Hoifung Poon and Lucy Vanderwende. 2010. Joint inference for knowledge extraction from biomedical literature. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 813–821. http://aclweb.org/anthology/N10-1123. Sebastian Riedel, Hong-Woo Chun, Toshihisa Takagi, and Jun’ichi Tsujii. 2009. Proceedings of the bionlp 2009 workshop companion volume for shared task. Association for Computational Linguistics, pages 41–49. http://aclweb.org/anthology/W09-1406. Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 831–843. https://doi.org/10.3115/v1/D14-1090. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701 https://arxiv.org/abs/1212.5701. Zhu Zhu, Shoushan Li, Guodong Zhou, and Rui Xia. 2014. Bilingual event extraction: a case study on trigger type determination. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 842–847. https://doi.org/10.3115/v1/P14-2136. 1798
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1799–1809 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1165 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1799–1809 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1165 Topical Coherence in LDA-based Models through Induced Segmentation Hesam Amoualian Univ. Grenoble Alps, CNRS, Grenoble INP - LIG [email protected] Wei Lu Singapore University of Technology and Design [email protected] Eric Gaussier Univ. Grenoble Alps, CNRS, Grenoble INP - LIG [email protected] Georgios Balikas Univ. Grenoble Alps, CNRS, Grenoble INP - LIG [email protected] Massih-Reza Amini Univ. Grenoble Alps, CNRS, Grenoble INP - LIG [email protected] Marianne Clausel Univ. Grenoble Alps, CNRS, Grenoble INP - LJK [email protected] Abstract This paper presents an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words. The coherence between topics is ensured through a copula, binding the topics associated to the words of a segment. In addition, this model relies on both document and segment specific topic distributions so as to capture fine grained differences in topic assignments. We show that the proposed model naturally encompasses other state-of-the-art LDA-based models designed for similar tasks. Furthermore, our experiments, conducted on six different publicly available datasets, show the effectiveness of our model in terms of perplexity, Normalized Pointwise Mutual Information, which captures the coherence between the generated topics, and the Micro F1 measure for text classification. 1 Introduction Since the seminal works of Hofmann (1999) and Blei et al. (2003), there have been several developments in probabilistic topic models. Many extensions have indeed been proposed for different applications, including ad-hoc information retrieval (Wei and Croft, 2006), clustering search results (Zeng et al., 2004) and driving faceted browsing (Mimno and McCallum, 2007). In most of these studies, the initial exchangeability assumptions of PLSA and LDA, stipulating that words within a document are interdependent, has led to incoherent topic assignments within semantically meaningful text units, even though the importance of having topically coherent phrases is generally admitted (Griffiths et al., 2005). More recently, (Balikas et al., 2016b) has shown that binding topics, so as to obtain more coherent topic assignments, within such text segments as noun phrases improves the performance (e.g. in terms of perplexity) of LDAbased models. The question nevertheless remains as to which segmentation one should rely on. Furthermore, text segments can refer to topics that are barely present in other parts of the document. For example, the segment “the Kurdish regional capital” in the sentence1 “A thousand protesters took to the main street in Erbil, the Kurdish regional capital, to condemn a new law requiring all public demonstrations to have government permits.” refers to geography in a document that is mainly devoted to politics. Relying on a single topic distribution, as done in most previous studies including (Balikas et al., 2016b), may prevent one from capturing those segment specific topics. 
In this paper, we propose a novel LDA-based model that automatically segments documents into topically coherent sequences of words. The coherence between topics is ensured through copulas (Elidan, 2013) that bind the topics associated to the words of a segment. In addition, this model relies on both document and segment specific topic distri1This sentence is taken from New York Times news (NYT) collection described in Section 4. 1799 butions so as to capture fine grained differences in topic assignments. A simple switching mechanism is used to select the appropriate distribution (document or segment specific) for assigning a topic to a word. We show that this model naturally encompasses other state-of-the-art LDA-based models proposed to accomplish the same task, and that it outperforms these models over six publicly available collections in terms of perplexity, Normalized Pointwise Mutual Information (NPMI), a measure used to assess the coherence of topics with documents, and the Micro F1-measure in a text classification context. 2 Related work Probabilistic Latent Semantic Analysis (PLSA) proposed by (Hofmann, 1999) is the first probabilistic model that explains the generation of cooccurrence data using latent random topics and, the EM algorithm for parameter estimation. The model was found more flexible and scalable than the Latent Semantic Analysis (Deerwester et al., 1990), which is based on the singular value decomposition of the document-term matrix, however PLSA is not a generative model as parameter estimation should be performed at each addition of new documents. To overcome this drawback, Blei et al. (2003) proposed the Latent Dirichlet Allocation (LDA) by assuming that the latent topics are random variables sampled from a Dirichlet distribution and that the generated words, occurring in a document, are exchangeable. The interdependence assumption allows the parameter estimation and the inference of the LDA model to be carried out efficiently, but it is not realistic in the sense that topics assigned to similar words of a text span are generally incoherent. Different studies, presented in the following sections, attempted to remedy this problem and they can be grouped in two broad families depending on whether they make use of external knowledgebased tools or not in order to exhibit text structure for word-topic assignment. 2.1 Knowledge-based topic assignments The main assumption behind these models are that text-spans such as sentences, phrases or segments are related in their content. Therefore, the integration of these dependent structures can help to discover coherent latent topics for words. Different attempts to combine LDA-based models with statistical tools to discover document structures have been successfully proposed, such as the study of Griffiths et al. (2005) who investigated the effect of combining a Hidden Markov Model with LDA to capture long and short distance dependencies. Similarly, (Boyd-Graber and Blei, 2008; Balikas et al., 2016a,b) integrated text structure exhibited by a parser or a chunker in their topic models. In this line, Du et al. (2013) following (Du et al., 2010) presented a hierarchical Bayesian model for unsupervised topic segmentation. This model integrates a boundary sampling method used in a Bayesian segmentation model introduced by Purver et al.(2006) to the topic model. For inference, a non-parametric Markov Chain inference is used that splits and merges the segments while a PitmanYor process (Teh, 2006) binds the topics. 
Recently, Tamura and Sumita (2016) extended this idea to the bilingual setting. They assume that documents consist of segments and that the topic distribution of each segment is generated using a Pitman-Yor process (Teh, 2006). Though the topic assignments follow the structure of the text, these models suffer from the bias of the statistical or linguistic tools they rely on. To overcome this limitation, other systems automatically integrate the extraction of text structure, in the form of phrases, into their process.
2.2 Knowledge-free topic assignments
This type of model extracts text spans using n-gram counts and word collections, and uses bigrams to integrate the order of words as well as to capture the topical content of a phrase (Lau et al., 2013). In (Wang et al., 2007), depending on the topic, a particular bigram can either be considered as a single token or as two unigrams. Further, Wang et al. (2009) merged topic models with a unigram model over sentences that assigns topics to the sentences instead of the words. Our proposed approach also does not make use of external statistical tools to find text segments. The main difference with the previous knowledge-free topic model approaches is that the proposed approach assigns topics to words based on two distributions, a segment-specific one and a document-specific one, selected through a Bernoulli draw. Topics within segments are then constrained using copulas that bind their distributions. In this way, segmentation is embedded in the model and naturally comes along with the topic assignment.

Figure 1: Graphical models for Copula LDA (copLDA), the extension of Copula LDA with segmentation (segLDAcopp=0), LDA with segmentation and topic shift (segLDAcopλ=0) and the complete model (segLDAcop).

3 Joint latent model for topics and segments
We define here a segment as a topically coherent sequence of contiguous words. By topically coherent, we mean that, even though words in a segment can be associated to different topics, these topics are usually related. This view is in line with the one expressed in (Balikas et al., 2016b), in which a latent topic model, referred to as copLDA in the remainder, includes a binding mechanism between topics within coherent text spans, defined in their study as noun phrases (NPs). The relation between topics is captured through a copula that provides a joint probability for all the topics used in a segment. That is, to generate words in a segment, one first jointly generates all the word specific topics z via a copula, and then generates each word in the segment from its word specific topic and the word-topic distribution φ. Figure 1(a) illustrates this. Copulas are particularly useful when modeling dependencies between random variables, as the joint cumulative distribution function (CDF) F_{X1,...,Xn} of any random vector X = (X1, ..., Xn) can be written as a function of its marginals, according to Sklar's Theorem (Nelsen, 2006):

F_{X1,...,Xn}(x1, ..., xn) = C(F_{X1}(x1), ..., F_{Xn}(xn))

where C is a copula.
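To make Sklar's decomposition concrete, the sketch below (a sketch, not the authors' code) evaluates one member of this family, the single-parameter Frank copula used later in this paper, via its Archimedean generator; the parameter controls the strength of dependence, with small values approaching independence (the product of the margins) and large values approaching the comonotone copula min(u_i), i.e. the variables tend to take the same value.

```python
import numpy as np

def frank_cdf(u, lam):
    """CDF of the n-dimensional Frank copula C(u_1, ..., u_n) with parameter lam > 0,
    built from the Archimedean generator phi(t) = -log((exp(-lam*t)-1)/(exp(-lam)-1))."""
    u = np.asarray(u, dtype=float)
    phi = -np.log(np.expm1(-lam * u) / np.expm1(-lam))    # generator applied to each margin
    s = phi.sum()
    return -np.log1p(np.exp(-s) * np.expm1(-lam)) / lam   # inverse generator of the sum

u = [0.3, 0.6]
for lam in (0.01, 5.0, 50.0):
    # compare with the independence copula prod(u) and the comonotone bound min(u)
    print(lam, frank_cdf(u, lam), np.prod(u), min(u))
```

In the topic model, sampling related topics for a segment amounts to drawing a uniform vector (u_1, ..., u_n) from such a copula (e.g. with the R copula package used in the experiments) and passing each u_i through the inverse CDF of its categorical marginal.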
For latent topic models, as discussed in (Amoualian et al., 2016), Frank’s copula is particularly interesting as (a) it is invariant by permutations and associative, as are the words and topics z in each segment due to the exchangeability assumption, and (b) it relies on a single parameter (denoted λ here) that controls the strength of dependence between the variables and is thus easy to implement. In Frank’s copula, when the parameter λ approaches 0, the variables are independent of each other, whereas when λ approaches +∞, the variables take the same value. For further details on copulas, we refer the reader to (Nelsen, 2006). One important problem, however, with copLDA is its reliance on a predefined segmentation. Although the information brought by the segmentation based on NPs helps to improve topic assignment, it may not be flexible enough to capture all the possible segments of a text. It is easy to correct this problem by considering all possible segmentations of a document and by choosing the most appropriate one at the same time that topics are assigned to words. This is illustrated in Figure 1(b), where a segmentation S is chosen from the set Sd of possible segmentations for a document d, and where each segment in S are generated in turn. We refer to the associated model as segLDAcopp=0 for reasons that will become clear later. Another point to be noted about copLDA (and 1801 segLDAcopp=0) is that the topics used in each segment come from the same document specific topic distribution θd. This entails that, in these models, one cannot differentiate the main topics of a document from potential segment specific topics that can explain some parts of it. Indeed, some text segments can refer to topics that are barely present in other parts of the document; relying on a single topic distribution may prevent one from capturing those segment specific topics. It is possible to overcome this difficulty by generating a segment specific topic distribution as illustrated in Figure 1(c) (this model is referred to as segLDAcopλ=0, again for reasons that will become clear later). However, as some words in a segment can be associated to the general topics of a document, we introduce a mechanism to choose, for each word in a segment, a topic either from the segment specific topic distribution θs or from the document specific topic distribution θd (this mechanism is similar to the one used for routes and levels in (Paul and Girju, 2010)). The choice between them is based on the Bernoulli variable f, as explained in the generative story given below. The above developments can be combined in a single, complete model, illustrated in Figure 1(d) and detailed below. We will simply refer to this model as segLDAcop. 3.1 Complete generative model As in standard LDA based models, with V denoting the size of the vocabulary of the collection and K the number of latent topics, β and φk, 1 ≤k ≤K, are V dimensional vectors, α and θ (i.e., θd, θs, θd,s,n) are K dimensional vectors, whereas zn takes value in {1, · · · , K}. Lower indices are used to denote coordinates of the above vectors. Lastly, Dir denotes the Dirichlet distribution, Cat the categorical distribution (which is a multinomial distribution with one draw) and we omit, as is usual, the generation of the length of the document. The complete model segLDAcop is then based on the following generative process: 1. Generate, for each topic k, 1 ≤k ≤K, a distribution over the words: φk ∼Dir(β); 2. 
For each document d, 1 ≤d ≤D: (a) Choose a document specific topic distribution: θd ∼Dir(α); (b) Choose a segmentation S of the document uniformly from the set of all possible segmentations Sd: P(S) = 1 |Sd|; (c) For each segment s in S: (i) Choose a segment specific topic distribution: θs ∼Dir(α); (ii) For each position n in s, choose fn ∼ Ber(p) and set: θd,s,n =  θs if fn = 1 θd otherwise (iii) Choose topics Zs = {z1, . . . , zn} from Frank’s copula with parameter λ and marginals Cat(θd,s,n); (iv) For each position n in s, choose word wn: wn ∼Cat(φzn). As on can note, the generative process relies on a segmentation uniformly chosen from the set of possible segmentations (step 2.b) to generate related topics within each segment (Frank’s copula in step 2.c.(iii)), the distribution underlying each word specific topic zn being either specific to the segment or general to the document (steps 2.c.(i) and 2.c.(ii)). The other steps are similar to the standard LDA steps. As in almost all previous studies on LDA, α and β are considered fixed and symmetric, each coordinate of the vector being equal: α1 = · · · = αK. The hyperparameters p (∈[0, 1]) of the Bernoulli distribution and λ (∈[0, +∞]) of Frank’s copula respectively regulate the choice between the segment specific and the document specific topic distributions and the strength of the dependence between topics in a segment. As for the other hyperparameters, we consider them fixed here (the values for all hyperparameters are given in Section 4). As mentioned before, all the models presented in Figure 1 are special cases of the complete model segLDAcop: hence segLDAcopλ=0 is obtained by dropping the topic dependencies, which amounts to setting λ to (a value close to) 0, segLDAcopp=0 is obtained by relying only on the topic distribution obtained for the document, which amounts to setting p to 0, and the previously introduced copLDA model is obtained by setting p to 0, and fixing the segmentation. 3.2 Inference with Gibbs sampling The parameters of the complete model can be directly estimated through Gibbs sampling. The Gibbs updates for the parameters φ and θ are the same as the ones for standard LDA (Blei et al., 1802 2003). The parameters fn are directly estimated through: fn ∼Ber(p). Lastly, for the variables z, we follow the same strategy as the one described in (Balikas et al., 2016b) and based on (Amoualian et al., 2016), leading to: P(Zs|Z−s, W, Θ, Φ, λ) = p(Zs|Θ, λ) Y n φzn wn where W denotes the document collection, and Θ and Φ the sets of all θ and φk, 1 ≤k ≤K, vectors. p(Zs|Θ, λ) is obtained by Frank’s copula with parameter λ and marginals Cat(θd,s,n). As is standard in topic models, the notation −s means excluding the information from s. From the above equation, one can formulate an acceptance/rejection algorithm based on the following steps: (a) sample Zs from p(Zs|Θ, λ) using Frank’s copula, and (b) accept the sample with probability Q n φzn wn, where n runs over all the positions in segment s. 3.3 Efficient segmentation As topics may change from one sentence to another, we assume here that segments cannot overlap sentence boundaries. The different segmentations of a document are thus based on its sentence segmentations. In the remainder, we use L to denote the maximum length of a segment and g(M; L) to denote the number of segmentations in a sentence of length M, each segment comprising at most L words. 
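Returning to the acceptance/rejection step of the Gibbs sampler described above, a minimal sketch (hypothetical helper names; the joint copula proposal is assumed to be provided externally, e.g. by the copula sampler mentioned in the experiments, and is not shown here) could look as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_segment_topics(words, marginals, phi, propose_topics, max_tries=1000):
    """Acceptance/rejection step for the topics Z_s of one segment:
    (a) propose Z_s jointly from Frank's copula with marginals Cat(theta_{d,s,n})
        via the user-supplied `propose_topics(marginals)` (assumed, not shown);
    (b) accept the proposal with probability prod_n phi[z_n, w_n].
    Acceptance probabilities are small for long segments; this sketches the principle only."""
    for _ in range(max_tries):
        z = propose_topics(marginals)               # joint proposal, one topic per word
        accept_prob = np.prod([phi[z_n, w_n] for z_n, w_n in zip(z, words)])
        if rng.random() < accept_prob:
            return z
    return z                                        # fall back to the last proposal

# independent proposal (the lambda -> 0 limit): each topic drawn from its own Cat(theta)
def independent_proposal(marginals):
    return [rng.choice(len(t), p=t) for t in marginals]
```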
Generating all possible segmentations of a sentence and then selecting one at random is not an efficient process as the number of segments rapidly grows with the length of the sentence. In practice, however, one can define an efficient segmentation on the basis of the following proposition, the proof of which is given in Appendix A: Proposition 3.1. Let ls i be the random variable associated to the length of the segment starting at position i in a sentence of length M (positions go from 1 to M and ls i takes value in {1, · · · , L}). Then P(ls i = l) := g(M+1−i−l);L) g(M+1−i;L) defines a probability distribution over ls i . Furthermore, the following process is equivalent to choosing sentence segmentations uniformly from the set of possible segmentations. From pos. 1, repeat till end of sentence: (a) Generate segment length acc. to P; (b) Add segment to current segmentation; (c) Move to position after the segment. In practice, we thus replace steps 2.b and 2.c of the generative story by a loop over all sentences, and in each sentence use the process described in Prop, 3.1. Furthermore, as described in Appendix A, the values of g needed to compute P(ls i = l) can be efficiently computed by recurrence. 4 Experiments We conducted a number of experiments aimed at studying the impact of simultaneously segmenting and assigning topics to words within segments using the proposed segLDAcop model. Datasets: We considered six publicly available datasets derived from Pubmed2 (Tsatsaronis et al., 2015), Wikipedia (Partalas et al., 2015), Reuters3 and New York Times (NYT)4 (Yao et al., 2016). The first two collections were considered in (Balikas et al., 2016a), we followed their setup by considering 3 subsets of Wikipedia with different number of classes (namely, Wiki0, Wiki1 and Wiki2). The Reuters dataset comes from Reuters-21578, Distribution 1.0 as investigated in (Bird et al., 2009) and the NYT dataset is collected from full text of New York Times global news, from January 1st to December 31st, 2011. These collections were processed following (Blei et al., 2003) by removing a standard list of 50 stop words, lemmatizing, lowercasing and keeping only words made of letters. To deal with relatively homogeneous collections, we also removed documents that are too long. The statistics of these datasets, as well as the admissible maximal length for documents, in terms of the number of words they contain, can be found in Table 1. Settings: We compared our models (segLDAcopp=0, segLDAcopλ=0, segLDAcop) with three models, namely the standard LDA model, and two previously introduced models aiming at binding topics within segments: 1. LDA: Standard Latent Dirichlet Allocation implemented using collapsed Gibbs sampling inference (Griffiths and Steyvers, 2004)5. Note 2https://github.com/balikasg/ topicModelling/tree/master/data 3https://archive.ics.uci.edu/ ml/datasets/Reuters-21578+Text+ Categorization+Collection 4https://github.com/yao8839836/COT/ tree/master/data 5http://gibbslda.sourceforge.net 1803 Wiki0 Wiki1 Wiki2 # words 32,354 70,954 103,308 – vocabulary size 7,853 12,689 14,715 # docs 1,014 2,138 3,152 – maximal length 100 100 100 # labels 17 42 53 Pubmed Reuters NYT # words 104,683 192,562 237,046 – vocabulary size 12,779 10,479 17,773 # docs 2,059 6,708 2,564 – maximal length 75 50 200 # labels 50 83 Table 1: Dataset statistics. that there are neither segmentation nor topic binding mechanisms in this model; 2. 
senLDA: Sentence LDA, introduced in (Balikas et al., 2016a), which forces all words within a sentence to be assigned to the same topic. The segments considered thus correspond to sentences, and the binding between topics within segments is maximal as all word specific topics are equal;
3. copLDA: Copula LDA, introduced in (Balikas et al., 2016b) and already discussed before, which relies on two types of segments, namely NPs (extracted with the nltk.chunk package (Bird et al., 2009)) and single words. In addition, a copula is also used to bind topics within NPs, from the document specific topic distribution.
The senLDA and copLDA implementations can be found at https://github.com/balikasg/topicModelling. In all models α and β play a symmetric role and are both fixed to 1/K, following (Asuncion et al., 2009). For copula based models, λ is set to 5, following (Balikas et al., 2016b). As already discussed, p is set to 0 for segLDAcopp=0; it is set to 0.5 for segLDAcop so as not to privilege a priori one topic distribution (document or segment specific) over the other. For sampling from Frank's copula, we relied on the R copula package (Hofert and Maechler, 2011); our complete code will be available for research purposes. We chose L (the maximum length of a segment) using line search for L ∈ [2, 5] and used L = 3 in all our experiments. Finally, to illustrate the behaviors of the different models with different numbers of topics, we present here the results obtained with K = 20 and K = 100.
We now compare the different models along three main dimensions: perplexity, use of topic representations for classification, and topic coherence.
4.1 Perplexity
We first randomly split all the collections, using 75% of them for training and 25% for testing. In order to see how well the models fit the data and following (Blei et al., 2003), we first evaluated the methods in terms of perplexity, defined as:

Perplexity = exp( − (Σ_{d∈D} Σ_{w∈d} log Σ_{k=1}^{K} θ_k^d φ_w^k) / (Σ_{d∈D} |d|) )

where d is a test document from the test set D, |d| is the total number of words in d, and K is the total number of topics. The lower the perplexity, the better the model fits the test data. Table 2 shows the perplexities of the different methods for K = 20 and K = 100 topics.

Figure 2: Perplexity with respect to training iterations on the NYT collection (20 topics), for LDA, senLDA, copLDA, segLDAcopp=0, segLDAcopλ=0 and segLDAcop.

From Table 2, it comes out that the best performing model in terms of perplexity over all datasets and for different numbers of topics is segLDAcop. Further, segLDAcopλ=0, which uses both document and segment specific topic distributions, performs better than segLDAcopp=0, which in turn outperforms copLDA, bringing evidence that using all possible segmentations rather than only NP units extracted with a chunker yields a more flexible and natural topic assignment. segLDAcop also converges faster than the other methods to its minimum, as shown in Figure 2, which depicts the evolution of the perplexity of the different models over the number of iterations on the NYT collection (a similar behavior is observed on the other collections).
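As an illustration, the following short sketch (hypothetical array layout; a sketch, not the authors' evaluation code) computes the perplexity above from an estimated document-topic matrix theta (D x K) for the test documents and a topic-word matrix phi (K x V), with held-out documents given as lists of word ids.

```python
import numpy as np

def perplexity(test_docs, theta, phi):
    """Perplexity of held-out documents.
    test_docs: list of documents, each a list of word ids
    theta:     (D, K) document-topic distributions for the test documents
    phi:       (K, V) topic-word distributions."""
    log_lik, n_words = 0.0, 0
    for d, doc in enumerate(test_docs):
        for w in doc:
            # p(w | d) = sum_k theta[d, k] * phi[k, w]
            log_lik += np.log(theta[d] @ phi[:, w])
            n_words += 1
    return np.exp(-log_lik / n_words)

# toy example with K = 2 topics and a vocabulary of V = 3 words
theta = np.array([[0.7, 0.3]])
phi = np.array([[0.5, 0.4, 0.1],
                [0.1, 0.2, 0.7]])
print(perplexity([[0, 2, 1]], theta, phi))
```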
Models         Wiki0          Wiki1          Wiki2          Pubmed         Reuters        NYT
               20     100     20     100     20     100     20     100     20     100     20      100
LDA            853.7  370.9   1144.6 541.1   1225.2 570.6   1267.8 628.7   210.6  118.8   1600.1  1172.1
senLDA         958.4  420.5   1236.7 675.3   1253.1 625.2   1346.3 674.3   254.3  173.6   1735.9  1215.3
copLDA         753.1  264.3   954.1  411.5   1028.6 420.6   1031.5 483.2   206.3  101.3   1551.5  1063.2
segLDAcopp=0   670.2  235.4   904.2  382.4   975.7  409.2   985.5  459.3   194.2  96.7    1504.2  1033.2
segLDAcopλ=0   655.1  222.1   890.3  370.2   949.2  404.3   971.3  451.2   190.1  91.3    1474.6  1014.3
segLDAcop      621.2  213.5   861.2  358.6   934.7  394.4   960.4  442.1   182.1  87.5    1424.2  992.3

Table 2: Perplexity with respect to different numbers of topics (20 and 100).

Models         Wiki0        Wiki1        Wiki2        Pubmed       Reuters
               20    100    20    100    20    100    20    100    20    100
LDA            55.3  63.5   42.4  51.4   41.2  48.7   54.1  63.5   75.5  82.7
senLDA         41.4  53.2   33.5  44.5   36.4  40.9   50.2  62.5   69.4  74.2
copLDA         51.2  62.7   43.4  52.1   40.8  46.5   53.5  63.1   75.2  81.5
segLDAcopp=0   59.1  64.2   44.8  51.2   42.3  50.1   55.4  63.1   76.8  82.5
segLDAcopλ=0   61.1  67.4   46.5  53.8   44.1  52.2   57.1  65.2   79.6  84.4
segLDAcop      62.3  68.4   48.4  55.2   44.8  53.5   59.3  66.5   80.2  85.1

Table 3: MiF score (percent) with respect to different numbers of topics (20 and 100).

4.2 Topical induced representation for classification
Some studies compare topic models using extrinsic tasks such as document classification. In this case, it is possible to reduce the dimensionality of the representation space by using the induced topics (Blei et al., 2003). In this study, we first randomly split the datasets, except NYT which does not contain class information, into training (75%) and test (25%) sets. We then applied SVMs with a linear kernel; the value of the hyperparameter C was found by cross-validation over the training set in {0.01, 0.1, 1, 10, 100}. For datasets where certain documents have more than one label (Pubmed, Reuters), we used the one-versus-all approach for performing multi-label classification. In Table 3, we report the Micro F1 (MiF) score of the different models on the test sets.
Again, the best results are obtained with segLDAcop, followed by segLDAcopλ=0. This shows the importance of relying on both document and segment specific topic distributions. As conjectured before, our model is able to capture fine grained topic assignments within documents. In addition, all models relying on an inferred segmentation (segLDAcopp=0, segLDAcopλ=0, segLDAcop) outperform the models relying on fixed segmentations (sentences or NPs). This shows the importance of being able to discover flexible segmentations for assigning topics within documents.
4.3 Topic coherence
Another common way to evaluate topic models is to examine how coherent the produced topics are. Doing this manually is a time-consuming process and cannot scale. To overcome this limitation, the task of automatically evaluating the coherence of topics produced by topic models has received a lot of attention (Mimno et al., 2011). It has been found that scoring the topics using co-occurrence measures, such as the pointwise mutual information (PMI) between the top words of a topic, correlates well with human judgments (Newman et al., 2010). For this purpose, an external, large corpus is used as a meta-document where the PMI scores of pairs of words are estimated using a sliding window. As discussed above, calculating the co-occurrence measures requires selecting the top-N words of a topic and performing the manual or automatic evaluation. Hence, N is a hyper-parameter to be chosen and its value can impact the results.
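To make the coherence computation concrete, the following sketch (hypothetical helper names; a sketch, not the authors' evaluation script) estimates the average NPMI of the top-N words of a topic from sliding-window co-occurrence counts over a reference corpus; the scores can then be aggregated over several values of N, as discussed next.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def npmi_topic(top_words, docs, window=10):
    """Average NPMI over all pairs of top_words.
    docs: reference corpus (meta-document) as a list of token lists;
    probabilities are estimated from sliding windows of the given size."""
    occ, cooc, n_windows = Counter(), Counter(), 0
    for doc in docs:
        for start in range(max(1, len(doc) - window + 1)):
            win = set(doc[start:start + window])
            n_windows += 1
            for w in win:
                occ[w] += 1
            for a, b in combinations(sorted(win), 2):
                cooc[(a, b)] += 1
    scores = []
    for a, b in combinations(sorted(set(top_words)), 2):
        p_a, p_b = occ[a] / n_windows, occ[b] / n_windows
        p_ab = cooc[(a, b)] / n_windows
        if p_a == 0 or p_b == 0 or p_ab == 0:
            scores.append(0.0)                      # convention assumed for unseen pairs
            continue
        pmi = np.log(p_ab / (p_a * p_b))
        denom = -np.log(p_ab)
        scores.append(pmi / denom if denom > 0 else 1.0)
    return float(np.mean(scores)) if scores else 0.0
```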
Very recently, Lau and Baldwin (2016) showed that N actually impacts the quality of the obtained results and, in particular, the correlation with human judgments. In their work, they found that aggregating the topic coherence scores over several topic cardinalities leads to a substantially more stable and robust evaluation. Following the findings of Lau and Baldwin (2016) and using the equation of (Newman et al., 2010), we present in Figure 3 the topic coherence scores as measured by the Normalized Pointwise Mutual Information (NPMI). Its values lie in [-1, 1], where in the limit of -1 two words w1 and w2 never occur together, while in the limit of +1 they always occur together (complete co-occurrence). For the reported scores, we aggregate the topic coherence scores over three different topic cardinalities: N ∈ {5, 10, 15}.

Figure 3: Topic coherence (NPMI, %) with respect to 100 topics, for LDA, senLDA, copLDA, segLDAcopp=0, segLDAcopλ=0 and segLDAcop on the six datasets.

The segLDAcop model, which uses copulas and segmentation together, shows the best score for the given reference meta-data (Wikipedia) on all of the datasets. It should be noted that segLDAcopλ=0, which does not include the copula binding inside the model, improves less than segLDAcopp=0, which does include it. This means that using the copula has more effect on topic coherence than the segment-specific topic distribution alone.
4.4 Visualization
In order to illustrate the results obtained by segLDAcop, we display in Figure 4 the top 10 most probable words over 5 topics (K = 20) of the Reuters dataset, for both segLDAcop and LDA.

Figure 4: Top-10 words of segLDAcop (left) vs LDA (right) for Reuters (5 out of 20 topics).
Topic1  segLDAcop: march, fell, rose, january, rise, year, fall, february, pct, week
        LDA: fell, mln, year, january, dlrs, rise, rose, pct, billion, february
Topic2  segLDAcop: currency, bank, pct, cut, rate, day, prime, exchange, interest, national
        LDA: billion, prime, day, rate, dlrs, pct, reserve, federal, fed, bank
Topic3  segLDAcop: term, agreement, acquire, buy, sell, unit, acquisition, corp, company, sale
        LDA: term, dlrs, buy, company, sell, unit, corp, acquisition, sale, mln
Topic4  segLDAcop: approved, american, common, split, merger, company, board, stock, share, shareholder
        LDA: acquire, mln, company, common, stock, shareholder, share, corp, merger, dlrs
Topic5  segLDAcop: tokyo, life, intent, letter, buy, insurance, yen, japan, dealer, dollar
        LDA: central, european, japan, yen, ec, dollar, bank, rate, dealer, market

For topic 1 of segLDAcop, the top-ranked words are mostly relevant to the topic "date" (e.g., march, january, year, fall, february, week). However, a similar topic learned by LDA appears to involve fewer such words (year, january, february), indicating a less coherent topic. Figure 5 illustrates another aspect of our model, namely the possibility to detect topically coherent segments.

Figure 5: Topic assignments with segmentation boundaries using segLDAcop on the sentence "Ralph Borsodi was an economics theorist and practical experimenter interested in ways of living". Colors are topics (example from Wiki0, including stopwords, with 20 topics).

In particular, as one can note, the sentence is segmented into six parts by our model, the first one being the NP Ralph Borsodi, where a single topic is assigned to both words.
We observe a similar coherence in the topic assignments on other NPs and segments, in which a single topic is used for the words involved. The data-driven approach we have adopted here can discover such fine-grained differences, something the approaches based on fixed segmentations (either sentences or NPs) are less likely to achieve.

5 Discussion

In this paper, we have introduced an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words. The coherence between topics is ensured through Frank's copula, which binds the topics associated with the words of a segment. In addition, this model relies on both document- and segment-specific topic distributions so as to capture fine-grained differences in topic assignments. We have shown that this model naturally encompasses other state-of-the-art LDA-based models proposed to accomplish the same task, and that it outperforms these models over six publicly available collections in terms of perplexity, Normalized Pointwise Mutual Information (NPMI), a measure used to assess the coherence of topics with documents, and the Micro F1-measure in a text classification context. Our results confirm the importance of a flexible segmentation as well as a binding mechanism to produce topically coherent segments.

As regards complexity, it is true that more complex models, such as the one we are considering, are more prone to underfitting (when data is scarce) and overfitting than simpler models. That said, the experimental results on perplexity (in which the word-topic distributions are fixed) and on classification (based on the topical induced representations) suggest that our model neither underfits nor overfits compared to simpler models. We believe that this is due to the fact that the main additional parameters in our model (the segment-specific topic distributions) do not really add complexity, as they are drawn from the same distribution as the standard document-specific topics. Furthermore, the parameters p and f are simple parameters to choose between these two distributions.

The comparison with other segmentation methods is also an important point. While state-of-the-art supervised segmentation models can be used before applying the LDA model, we note that such a pipeline approach comes with several limitations. It requires external annotated data to train the segmentation models, in which certain domain- and language-specific information needs to be captured. By contrast, our unsupervised approach learns both segmentations and topics jointly in a domain- and language-independent manner. Furthermore, existing supervised segmentation models are largely designed for a very different purpose, with strong linguistic motivations, which may not align well with our main goal in this paper, namely improving topic coherence in topic modeling. Similarly, unsupervised approaches, used for example in the TDT (Topic Detection and Tracking) campaigns or more recently in Du et al. (2013), usually consider coarse-grained topics that can encompass several sentences. In contrast, our approach aims at identifying fine-grained topics associated with coherent segments that do not overlap sentence boundaries. These considerations explain the choice of the baselines retained: they are based on segments of different granularities (words, NPs, sentences) that do not overlap sentence boundaries.
In the future, we plan on relying on other inference approaches, based for example on variational Bayes known to yield better estimates for perplexity (Asuncion et al., 2009); it is however not certain that the gain in perplexity one can expect from the use of variational Bayes approaches will necessarily result in a gain in, say, topic coherence. Indeed, the impact of the inference approach on the different usages of latent topic models for text collections remains to be better understood. Acknowledgments We would like to thank the reviewers for their helpful comments. Most of this work was done when Hesam Amoualian was visiting Singapore University of Technology and Design. This work is supported by MOE Tier 1 grant SUTDT12015008, also partly supported by the LabEx PERSYVAL-Lab ANR-11-LABX-0025. References Hesam Amoualian, Marianne Clausel, Eric Gaussier, and Massih-Reza Amini. 2016. Streaming-lda: A copula-based approach to modeling topic dependencies in document streams. In Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, SIGKDD, pages 695–704. https://doi.org/10.1145/2939672.2939781. Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence. AUAI Press, Arlington, Virginia, United States, UAI, pages 27–34. http://dl.acm.org/citation.cfm?id=1795114.1795118. Georgios Balikas, Massih-Reza Amini, and Marianne Clausel. 2016a. On a topic model for sentences. In Proceedings of the 39th International Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR, pages 921–924. https://doi.org/10.1145/2911451.2914714. Georgios Balikas, Hesam Amoualian, Marianne Clausel, Eric Gaussier, and Massih R Amini. 2016b. Modeling topic dependencies in semantically coherent text spans with copulas. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, COLING, pages 1767–1776. http://aclweb.org/anthology/C161166. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly, Beijing. http://www.nltk.org/book/. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning 3:993–1022. http://dl.acm.org/citation.cfm?id=944919.944937. Jordan Boyd-Graber and David Blei. 2008. Syntactic topic models. In Proceedings of the 1807 21st International Conference on Neural Information Processing Systems. Curran Associates Inc., USA, NIPS, pages 185–192. http://dl.acm.org/citation.cfm?id=2981780.2981804. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6):391– 407. http://dx.doi.org/10.1002/(SICI)10974571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Lan Du, Wray Buntine, and Huidong Jin. 2010. A Segmented Topic Model Based on the Two-parameter Poisson-Dirichlet Process. Journal of Machine learning 81(1):5–19. https://doi.org/10.1007/s10994-010-5197-4. Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic Segmentation with a Structured Topic Model. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics, Human Language Technologies. HLT-NAACL, pages 190–200. 
http://dblp.unitrier.de/db/conf/naacl/naacl2013.html/DuBJ13. Gal Elidan. 2013. Copulas in Machine Learning, Springer Berlin Heidelberg, Berlin, Heidelberg, pages 39–60. https://doi.org/10.1007/978-3-64235407-6_3. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Journal of the National Academy of Sciences 101(suppl 1):5228–5235. https://doi.org/10.1073/pnas.0307752101. Thomas L Griffiths, Mark Steyvers, David M Blei, and Joshua B Tenenbaum. 2005. Integrating topics and syntax. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in the International Conference on Neural Information Processing Systems. MIT Press, NIPS, pages 537– 544. http://papers.nips.cc/paper/2587-integratingtopics-and-syntax.pdf. Marius Hofert and Martin Maechler. 2011. Nested Archimedean Copulas Meet R: The nacopula Package. Journal of Statistical Software 39(i09):–. https://doi.org/http://hdl.handle.net/10. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22Nd Annual International Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR, pages 50–57. https://doi.org/10.1145/312624.312649. Jey Han Lau and Timothy Baldwin. 2016. The sensitivity of topic coherence evaluation to topic cardinality. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics, Human Language Technologies, San Diego California, USA, June 12-17, 2016. NAACL, pages 483–487. http://aclweb.org/anthology/N/N16/N16-1057.pdf. Jey Han Lau, Timothy Baldwin, and David Newman. 2013. On collocations and topic models. Journal of ACM Trans. Speech Lang. Process. 10(3):10:1– 10:14. https://doi.org/10.1145/2483969.2483972. David Mimno and Andrew McCallum. 2007. Organizing the oca: Learning faceted subjects from a library of digital books. In Proceedings of the 7th Joint Conference on Digital Libraries. ACM, New York, NY, USA, JCDL ’07, pages 376–385. https://doi.org/10.1145/1255175.1255249. David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP, pages 262–272. http://dl.acm.org/citation.cfm?id=2145432.2145462. Roger B. Nelsen. 2006. An Introduction to Copulas (Springer Series in Statistics). SpringerVerlag New York, Inc., Secaucus, NJ, USA. http://www.springer.com/gp/book/9780387286594. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics, Human Language Technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL, pages 100–108. http://dl.acm.org/citation.cfm?id=1857999.1858011. Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, et al. 2015. LSHTC: A Benchmark for Large-Scale Text Classification. Journal of CoRR abs/1503.08581. http://arxiv.org/abs/1503.08581. Michael Paul and Roxana Girju. 2010. A twodimensional topic-aspect model for discovering multi-faceted topics. In Proceedings of the 24th Conference on Artificial Intelligence. AAAI Press, AAAI, pages 545–550. http://dl.acm.org/citation.cfm?id=2898607.2898695. Matthew Purver, Thomas L Griffiths, Konrad P. Körding, and Joshua B. Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. 
In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL, pages 17– 24. https://doi.org/10.3115/1220175.1220178. Akihiro Tamura and Eiichiro Sumita. 2016. Bilingual segmented topic model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. ACL. http://aclweb.org/anthology/P/P16/P16-1120.pdf. 1808 Yee Whye Teh. 2006. A hierarchical bayesian language model based on pitman-yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL, pages 985–992. https://doi.org/10.3115/1220175.1220299. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, et al. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. Journal of BMC Bioinformatics 16(1):138. https://doi.org/10.1186/s12859-015-0564-6. Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document summarization using sentence-based topic models. In Proceedings of the Conference on Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL-IJCNLP, pages 297–300. http://dl.acm.org/citation.cfm?id=1667583.1667675. Xuerui Wang, Andrew McCallum, and Xing Wei. 2007. Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Proceedings of the 7th International Conference on Data Mining. IEEE Computer Society, Washington, DC, USA, ICDM, pages 697–702. https://doi.org/10.1109/ICDM.2007.86. Xing Wei and W. Bruce Croft. 2006. Lda-based document models for ad-hoc retrieval. In Proceedings of the 29th Annual International Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR, pages 178–185. https://doi.org/10.1145/1148170.1148204. Liang Yao, Yin Zhang, Baogang Wei, Lei Li, Fei Wu, Peng Zhang, and Yali Bian. 2016. Concept over time: the combination of probabilistic topic model with wikipedia knowledge. Journal of Expert Systems with Applications 60:27 – 38. https://doi.org/10.1016/j.eswa.2016.04.014. Hua-Jun Zeng, Qi-Cai He, Zheng Chen, Wei-Ying Ma, and Jinwen Ma. 2004. Learning to cluster web search results. In Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, SIGIR, pages 210–217. https://doi.org/10.1145/1008992.1009030. A Efficient segmentation Let us recall the property presented before: Proposition A.1. Let ls i be the random variable associated to the length of the segment starting at position i in a sentence of length M (positions go from 1 to M and ls i takes value in {1, · · · , L}). Then P(ls i = l) := g(M+1−i−l);L) g(M+1−i;L) defines a probability distribution over ls i . Furthermore, the following process is equivalent to choosing sentence segmentations uniformly from the set of possible segmentations. From pos. 1, repeat till end of sentence: (a) Generate segment length acc. to P; (b) Add segment to current segmentation; (c) Move to position after the segment. Proof Any segmentation of the sentence of length M starts with either a segment of length 1, a segment of length 2, · · · , or a segment of length L. 
Thus, g(M; L) can be defined through the following recurrence relation:

g(M; L) = \sum_{l=1}^{L} g(M - l; L)    (1)

together with the initial values g(1; L), g(2; L), · · · , g(L; L), which can be computed offline (for example, for L = 3, one has g(1; 3) = 1, g(2; 3) = 2, g(3; 3) = 4). Note that g(1; L) = 1 for all L. Thus:

\sum_{l=1}^{L} P(l^s_i = l) = \sum_{l=1}^{L} \frac{g(M + 1 - i - l; L)}{g(M + 1 - i; L)} = 1

due to the recurrence relation on g. This proves the first part of the proposition. Using the process described above, where segments are generated one after another according to P, for a segmentation S comprising |S| segments, let us denote by l_1, l_2, · · · , l_{|S|} the lengths of the segments and by i_1, i_2, · · · , i_{|S|} their starting positions (with i_1 = 1). As segments are independent of each other, one has:

P(S) = \prod_{j=1}^{|S|} P(l^s_{i_j} = l_j) = \prod_{j=1}^{|S|} \frac{g(M + 1 - (i_j + l_j); L)}{g(M + 1 - i_j; L)} = \frac{g(M - l_1; L)}{g(M; L)} \cdot \frac{g(M - l_1 - l_2; L)}{g(M - l_1; L)} \cdots = \frac{1}{g(M; L)}

as g(1; L) = 1. This concludes the proof of the proposition. □

Furthermore, as one can note from Eq. 1, the various elements needed to compute P(l^s_i = l) can be computed efficiently, the time complexity being O(M). In addition, as the number of different sentence lengths is limited, one can store the values of g and reuse them during the segmentation phase.
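To make the recurrence and the sampling process concrete, the following Python sketch computes g(M; L) and draws segmentations uniformly at random by sampling successive segment lengths from P(l^s_i = l). It is only an illustration: the convention g(0; L) = 1 (and g(m; L) = 0 for m < 0) is an assumption chosen so that the base cases match the examples given above (g(1; 3) = 1, g(2; 3) = 2, g(3; 3) = 4), and all function names are hypothetical.

```python
# Illustrative sketch of the uniform-segmentation sampler of Proposition A.1.
# The convention g(m; L) = 0 for m < 0 and g(0; L) = 1 is an assumption made
# here so that the base cases match the examples in the text.
import random

def count_segmentations(M, L):
    """g(M; L): number of segmentations of a length-M sentence into segments
    of length at most L, via g(M; L) = sum_{l=1..L} g(M - l; L)."""
    g = [0] * (M + 1)
    g[0] = 1
    for m in range(1, M + 1):
        g[m] = sum(g[m - l] for l in range(1, L + 1) if m - l >= 0)
    return g  # all values g[0..M], computed in O(M * L) time

def sample_segmentation(M, L, rng=random):
    """Draw a segmentation uniformly at random by sampling each segment
    length from P(l^s_i = l) = g(M + 1 - i - l; L) / g(M + 1 - i; L)."""
    g = count_segmentations(M, L)
    segments, i = [], 1                      # positions go from 1 to M
    while i <= M:
        remaining = M + 1 - i
        weights = [g[remaining - l] if remaining - l >= 0 else 0
                   for l in range(1, L + 1)]
        l = rng.choices(range(1, L + 1), weights=weights)[0]
        segments.append((i, i + l - 1))      # inclusive [start, end] span
        i += l
    return segments

# Example: a 5-word sentence with L = 3 has g(5; 3) = 13 segmentations,
# so each one is drawn with probability 1/13.
print(count_segmentations(5, 3)[5])          # -> 13
print(sample_segmentation(5, 3))
```

Storing the table returned by count_segmentations for each sentence length, as suggested in the text, lets the sampler be reused across sentences at no extra cost.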
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1810–1820 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1166 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1810–1820 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1166 Jointly Extracting Relations with Class Ties via Effective Deep Ranking Hai Ye1, Wenhan Chao1, Zhunchen Luo2∗, Zhoujun Li1 1School of Computer Science and Engineering, Beihang University, Beijing 100191, China {yehai, chaowenhan, lizj}@buaa.edu.cn 2China Defense Science and Technology Information Center, Beijing 100142, China [email protected] Abstract Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network (CNN) with a general pairwise ranking framework, in which three novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the severe class imbalance problem from NR (not relation) for model training. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate the effectiveness of our model to learn class ties. Our model outperforms the baselines significantly, achieving stateof-the-art performance. 1 Introduction Relation extraction (RE) aims to classify the relations between two given named entities from natural-language text. Supervised machine learning methods require numerous labeled data to work well. With the rapid growth of volume of relation types, traditional methods can not keep up with the step for the limitation of labeled data. In order to narrow down the gap of data sparsity, Mintz et al. (2009) propose distant supervision (DS) for relation extraction, which automati∗Corresponding author. place lived (Patsy Ramsey, Atlanta) place of birth (Patsy Ramsey, Atlanta) Sentence Latent Label #1 Patsy Ramsey has been living in Atlanta since she was born. place of birth #2 Patsy Ramsy always loves Atlanta since it is her hometown. place lived Table 1: Training instances generated by freebase. cally generates training data by aligning a knowledge facts database (ie. Freebase (Bollacker et al., 2008)) with texts. Class ties mean the connections between relations in relation extraction. In general, we conclude that class ties can have two types: weak class ties and strong class ties. Weak class ties mainly involve the co-occurrence of relations such as place of birth and place lived, CEO of and founder of. On the contrary, strong class ties mean that relations have latent logical entailments. Take the two relations of capital of and city of for example, if one entity tuple has the relation of capital of, it must express the relation fact of city of, because the two relations have the entailment of capital of ⇒city of. Obviously the opposite induction is not correct. 
Further take the sentence of “Jonbenet told me that her mother [Patsy Ramsey]e1 never left [Atlanta]e2 since she was born.” in DS scenario for example. This sentence expresses two relation facts which are place of birth and place lived. However, the word “born” is a strong bios to extract place of birth, so it may not be easy to predict the relation of place lived, but if we can incorporate the weak ties between the two relations, extracting place of birth will provide evidence for prediction of place lived. Exploiting class ties is necessary for DS based relation extraction. In DS scenario, there is a challenge that one entity tuple can have multiple rela1810 tion facts as shown in Table 1, which is called relation overlapping (Hoffmann et al., 2011; Surdeanu et al., 2012). However, the relations of one entity tuple can have class ties mentioned above which can be leveraged to enhance relation extraction for it narrowing down potential searching spaces and reducing uncertainties between relations when predicting unknown relations. If one pair entities has CEO of, it will contain founder of with high possibility. To exploit class ties between relations, we propose to make joint extraction for all positive labels of one entity tuple with considering pairwise connections between positive and negative labels inspired by (F¨urnkranz et al., 2008; Zhang and Zhou, 2006). As the two relations with class ties shown in Table 1, by joint extraction of two relations, we can maintain the class ties (co-occurrence) of them from training samples to be learned by potential model, and then leverage this learned information to extract instances with unknown relations, which can not be achieved by separated extraction for it dividing labels apart losing information of cooccurrence. To classify positive labels from negative ones, we adopt pairwise ranking to rank positive ones higher than negative ones, exploiting pairwise connections between them. In a word, joint extraction exploits class ties between relations and pairwise ranking classify positive labels from negative ones. Furthermore, combining information across sentences will be more appropriate for joint extraction which provides more information from other sentences to extract each relation (Zheng et al., 2016; Lin et al., 2016). In Table 1, sentence #1 is the evidence for place of birth, but it also expresses the meaning of “living in someplace”, so it can be aggregated with sentence #2 to extract place lived. Meanwhile, the word of “hometown” in sentence #2 can provide evidence for place of birth which should be combined with sentence #1 to extract place of birth. In this work, we propose a unified model that integrates pairwise ranking with CNN to exploit class ties. Inspired by the effectiveness of deep learning for modeling sentences (LeCun et al., 2015), we use CNN to encode sentences. Similar to (Santos et al., 2015; Lin et al., 2016), we use class embeddings to represent relation classes. The whole model architecture is presented in Figure 1. We first use CNN to embed sentences, then we introduce two variant methods to combine the x2 x1 xn s1 s2 sn s c1 c2 cm 𝑊[#$] & 𝑠 class embedding encoded by CNN sentence embedding bag representation vector combine sentences Figure 1: The main architecture of our model. embedded sentences into one bag representation vector aiming to aggregate information across sentences, after that we measure the similarity between bag representation and relation class in realvalued space. 
With two variants for combining sentences, three novel pairwise ranking loss functions are proposed to make joint extraction. Besides, to relieve the bad impact of class imbalance from NR (not relation) (Japkowicz and Stephen, 2002) for training our model, we cut down loss propagation from NR class during training. Our experimental results on dataset of Riedel et al. (2010) are evident that: (1) Our model is much more effective than the baselines; (2) Leveraging class ties will enhance relation extraction and our model is efficient to learn class ties by joint extraction; (3) A much better model can be trained after relieving class imbalance from NR. Our contributions in this paper can be encapsulated as follows: • We propose to leverage class ties to enhance relation extraction. An effective deep ranking model which integrates CNN and pairwise ranking framework is introduced to exploit class ties. • We propose an effective method to relieve the impact of data imbalance from NR for model training. • Our method achieves state-of-the-art performance. 2 Related Work We summarize related works on two main aspects: 2.1 Distant Supervision Relation Extraction Previous works on DS based RE ignore or are not effective to leverage class ties between rela1811 tions. Riedel et al. (2010) introduce multi-instance learning to relieve the wrong labelling problem, ignoring class ties. Afterwards, Hoffmann et al. (2011) and Surdeanu et al. (2012) model this problem by multi-instance multi-label learning to extract overlapping relations. Though they also propose to make joint extraction of relations, they only use information from single sentence losing information from other sentences. Han and Sun (2016) try to use Markov logic model to capture consistency between relation labels, on the contrary, our model leverages deep ranking to learn class ties automatically. With the remarkable success of deep learning in CV and NLP (LeCun et al., 2015), deep learning has been applied to relation extraction (Zeng et al., 2014, 2015; Santos et al., 2015; Lin et al., 2016), the specific deep learning architecture can be CNN (Zeng et al., 2014), RNN (Zhou et al., 2016), etc. Zeng et al. (2015) propose a piecewise convolutional neural network with multi-instance learning for DS based relation extraction, which improves the precision and recall significantly. Afterwards, Lin et al. (2016) introduce the mechanism of attention (Luong et al., 2015; Bahdanau et al., 2014) to select the sentences to relieve the wrong labelling problem and use all the information across sentences. However, the two deep learning based models only make separated extraction thus can not model class ties between relations. 2.2 Deep Learning to Rank Deep learning to rank has been widely used in many problems to serve as a classification model. In image retrieval, Zhao et al. (2015) apply deep semantic ranking for multi-label image retrieval. In text matching, Severyn and Moschitti (2015) adopt learning to rank combined with deep CNN for short text pairs matching. In traditional supervised relation extraction, Santos et al. (2015) design a pairwise loss function based on CNN for single label relation extraction. Based on the advantage of deep learning to rank, we propose pairwise learning to rank (LTR) (Liu, 2009) combined with CNN in our model aiming to jointly extract multiple relations. 
3 Proposed Model In this section, we first conclude the notations used in this paper, then we introduce the used CNN for sentence embedding, afterwards, we present our algorithm of how to learn class ties between relations of one entity tuple. 3.1 Notation We define the relation classes as L = {1, 2, · · · , C}, entity tuples as T = {ti}M i=1 and mentions1 as X = {xi}N i=1. Dataset is constructed as follows: for entity tuple ti ∈T and its relation class set Li ⊆L, we collect all the mentions Xi that contain ti, the dataset we use is D = {(ti, Li, Xi)}H i=1. Given a data (tk, Lk, Xk) ∈ {(ti, Li, Xi)}H i=1, the sentence embeddings of Xk encoded by CNN are defined as Sk = {si}|Xk| i=1 and we use class embeddings W ∈R|L|×d to represent the relation classes. 3.2 CNN for Sentence Embedding We take the effective CNN architecture adopted from (Zeng et al., 2015; Lin et al., 2016) to encode sentence and we briefly introduce CNN in this section. More details of our CNN can be obtained from previous work. 3.2.1 Words Representations • Word Embedding Given a word embedding matrix V ∈ Rlw×d1 where lw is the size of word dictionary and d1 is the dimension of word embedding, the words of a mention x = {w1, w2, · · · , wn} will be represented by realvalued vectors from V . • Position Embedding The position embedding of a word measures the distance from the word to entities in a mention. We add position embeddings into words representations by appending position embedding to word embedding for every word. Given a position embedding matrix P ∈Rlp×d2 where lp is the number of distances and d2 is the dimension of position embeddings, the dimension of words representations becomes dw = d1 + d2 × 2. 3.2.2 Convolution, Piecewise max-pooling After transforming words in x to real-valued vectors, we get the sentence q ∈Rn×dw. The set of kernels K is {Ki}ds i=1 where ds is the number of kernels. Define the window size as dwin and given one kernel Kk ∈Rdwin×dw, the convolution operation is defined as follows: m[i] = q[i:i+dwin−1] ⊙Kk + b[k] (1) 1The sentence containing one certain entity is called mention. 1812 where m is the vector after conducting convolution along q for n −dwin + 1 times and b ∈Rds is the bias vector. For these vectors whose indexes out of range of [1, n], we replace them with zero vectors. By piecewise max-pooling, when pooling, the sentence is divided into three parts: m[p0:p1], m[p1:p2] and m[p2:p3] (p1 and p2 are the positions of entities, p0 is the beginning of sentence and p3 is the end of sentence). This piecewise max-pooling is defined as follows: z[j] = max(m[pj−1:pj]) (2) where z ∈R3 is the result of mention x processed by kernel Kk; 1 ≤j ≤3. Given the set of kernels K, following the above steps, the mention x can be embedded to o where o ∈Rds∗3. 3.2.3 Non-Linear Layer, Regularization To learn high-level features of mentions, we apply a non-linear layer after pooling layer. After that, a dropout layer is applied to prevent overfitting. We define the final fixed sentence representation as s ∈Rdf (df = ds ∗3). s = g(o) ◦h (3) where g(·) is a non-linear function and we use tanh(·) in this paper; h is a Bernoulli random vector with probability p to be 1. 3.3 Learning Class Ties by Joint Extraction with Pairwise Ranking As mentioned above, to learn class ties, we propose to make joint extraction with considering pairwise connections between positive labels and negative ones. Pairwise ranking is applied to achieve this goal. 
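Before turning to how sentences are combined across a bag, a minimal NumPy sketch of the sentence encoder of Section 3.2 (Eqs. (1)-(3)) may help fix ideas. It is an illustration only, not the paper's implementation: the zero-padding of out-of-range windows is one reading of the text, dropout is omitted, and the function names and toy sizes are assumed for the example.

```python
# Minimal NumPy sketch of the piecewise max-pooling sentence encoder
# (convolution, three-piece max-pooling around the two entities, tanh).
# Shapes follow the text (d_w = d1 + 2*d2, d_s kernels, window d_win);
# names and random toy inputs are illustrative, dropout is left out.
import numpy as np

def piecewise_cnn_encode(q, kernels, biases, p1, p2):
    """q: (n, d_w) word representations (word + two position embeddings).
    kernels: (d_s, d_win, d_w); biases: (d_s,).
    p1, p2: entity positions splitting the sentence into three pieces.
    Returns s in R^{3 * d_s}, cf. Eqs. (1)-(3)."""
    n, d_w = q.shape
    d_s, d_win, _ = kernels.shape
    # zero-pad so that windows running past the sentence end are defined
    padded = np.vstack([q, np.zeros((d_win - 1, d_w))])
    pooled = np.empty((d_s, 3))
    for k in range(d_s):
        # Eq. (1): convolve kernel K_k along the sentence
        m = np.array([np.sum(padded[i:i + d_win] * kernels[k]) + biases[k]
                      for i in range(n)])
        # Eq. (2): piecewise max-pooling over [0, p1), [p1, p2), [p2, n)
        pieces = (m[:p1], m[p1:p2], m[p2:])
        pooled[k] = [piece.max() if piece.size else 0.0 for piece in pieces]
    o = pooled.reshape(-1)          # o in R^{d_s * 3}
    return np.tanh(o)               # Eq. (3) without the dropout mask h

# toy usage with made-up sizes (12 words, d1=50 + 2*5 position dims)
rng = np.random.default_rng(0)
q = rng.normal(size=(12, 60))
K = rng.normal(size=(230, 3, 60))   # 230 kernels, window size 3
s = piecewise_cnn_encode(q, K, np.zeros(230), p1=3, p2=8)
print(s.shape)                      # (690,)
```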
Besides, combining information across sentences is necessary for joint extraction. More specifically, as shown in Figure 2, from down to top, all information from sentences is pre-propagated to provide enough information for joint extraction. From top to down, pairwise ranking jointly extracting positive relations by combining losses, which are back-propagated to CNN to learn class ties. 3.3.1 Combining Information across Sentences We propose two options to combine sentences to provide enough information for joint extraction. 1 2 x1 x2 xn c1 c2 cm s Class Ties Combine information from all sentences Joint extraction by combining losses Figure 2: Illustration of mechanism of our model to model class ties between relations. • AVE The first option is average method. This method regards all the sentences equally and directly average the values in all dimensions of sentence embedding. This AVE function is defined as follows: s = 1 n X si∈Sk si (4) where n is the number of sentences and s is the representation vector combining all sentence embeddings. Because it weights the importance of sentences equally, this method may bring much noise data from two aspects: (1) the wrong labelling data; (2) irrelated mentions for one relation class, for all sentences containing the same entity tuple being combined together to construct the bag representation. • ATT The second one is a sentence-level attention algorithm used by Lin et al. (2016) to measure the importance of sentences aiming to relieve the wrong labelling problem. For every sentence, ATT will calculate a weight by comparing the sentence to one relation. We first calculate the similarity between one sentence embedding and relation class as follows: ej = a · W[c] · sj (5) where ej is the similarity between sentence embedding sj and relation class c and a is a bias factor. In this paper, we set a as 0.5. Then we apply Softmax to rescale e (e = {ei}|Xk| i=1 ) to [0, 1]. We get the weight αj for sj as follows: αj = exp(ej) P ei∈e exp(ei) (6) so the function to merge s with ATT is as follows: 1813 s = |Xk| X i=1 αi · si (7) 3.3.2 Joint Extraction by Combining Losses to Learn Class Ties Firstly, we have to present the score function to measure the similarity between s and relation c. • Score Function We use dot function to produce score for s to be predicted as relation c. The score function is as follows: F(s, c) = W[c] · s (8) There are other options for score function. In Wang et al. (2016), they propose a margin based loss function that measures the similarity between s and W[c] by distance. Because score function is not an important issue in our model, we adopt dot function, also used by Santos et al. (2015) and Lin et al. (2016), as our score function. Now we start to introduce the ranking loss function. Pairwise ranking aims to learn the score function F(s, c) that ranks positive classes higher than negative ones. This goal can be summarized as follows: ∀c+ ∈Lk, ∀c−∈L−Lk : F(s, c+) > F(s, c−)+β (9) where β is a margin factor which controls the minimum margin between the positive scores and negative scores. To learn class ties between relations, we extend the formula (9) to make joint extraction and we propose three ranking loss functions with variants of combining sentences. 
The proposed loss functions are as follows:

• with AVE (Variant-1) We define the margin-based loss function with the AVE option for aggregating sentences as follows:

G_{ave} = \sum_{c^+ \in L_k} \rho \, [0, \; \sigma^+ - F(s, c^+)]_+ \; + \; \rho \, |L_k| \, [0, \; \sigma^- + F(s, c^-)]_+    (10)

where [0, ·]_+ = max(0, ·), ρ is a rescaling factor, σ+ is the positive margin, and σ− is the negative margin. Similar to Santos et al. (2015) and Wang et al. (2016), this loss function is designed to rank positive classes higher than negative ones, controlled by the margin σ+ − σ−. In practice, F(s, c+) is pushed above σ+ and F(s, c−) below −σ−. In our work, we set ρ to 2, σ+ to 2.5, and σ− to 0.5, adopted from Santos et al. (2015). Similar to Weston et al. (2011) and Santos et al. (2015), we update one negative class at every training round, but to balance the loss between positive classes and negative ones, we multiply the right term in function (10) by |L_k| to expand the negative loss. We apply mini-batch stochastic gradient descent (SGD) to minimize the loss function. The negative class is chosen as the one with the highest score among all negative classes (Santos et al., 2015), i.e.:

c^- = \arg\max_{c \in L - L_k} F(s, c)    (11)

• with ATT (Variant-2) We now define the loss function for the ATT option of combining sentences as follows:

G_{att} = \sum_{c^+ \in L_k} \big( \rho \, [0, \; \sigma^+ - F(s_{c^+}, c^+)]_+ + \rho \, [0, \; \sigma^- + F(s_{c^+}, c^-)]_+ \big)    (12)

where s_c denotes the bag representation whose attention weights are obtained by comparing the sentence embeddings with relation class c, and c− is chosen by the following function:

c^- = \arg\max_{c \in L - L_k} F(s_{c^+}, c)    (13)

which means we update one negative class in every training round. We keep the values of ρ, σ+, and σ− the same as in function (10). From this loss function we can see that, for each class c+ ∈ L_k, the model captures the most related information from the sentences to merge s_{c^+}, and then ranks F(s_{c^+}, c+) higher than all the negative scores F(s_{c^+}, c−) with c− ∈ L − L_k. We use the same update algorithm to minimize this loss.

• Extended with ATT (Variant-3) According to function (12), for each c+ we only select one negative class to update the parameters, which only considers the connections between positive classes and negative ones and ignores the connections between positive classes. We therefore extend function (12) to better exploit class ties by also considering the connections between positive classes. The extended loss function is as follows:

G_{Exatt} = \sum_{c^* \in L_k} \Big( \sum_{c^+ \in L_k} \rho \, [0, \; \sigma^+ - F(s_{c^*}, c^+)]_+ + \rho \, [0, \; \sigma^- + F(s_{c^*}, c^-)]_+ \Big)    (14)

Proportion | Training | Test
SemEval | 17.63% | 16.71%
Riedel | 72.52% | 96.26%
Table 2: The proportions of NR samples in the SemEval-2010 Task 8 dataset and the Riedel dataset.

Similar to function (13), we select c− as follows:

c^- = \arg\max_{c \in L - L_k} F(s_{c^*}, c)    (15)

and we use the same method as discussed above to update this loss function. From function (14), we can see that for c* ∈ L_k, after merging the bag representation s with c*, we share s with all the other positive classes and update their class embeddings with s. In this way, the connections between positive classes can be captured and learned by our model.

In loss functions (10), (12), and (14), we combine the losses from all positive labels to make a joint extraction that captures the class ties among relations. If we instead made separated extraction, the losses from the positive labels would be divided apart and would not carry enough information about the connections between positive labels, compared to joint extraction. Connections between positive labels and negative ones are exploited by controlling the margins σ+ and σ−.
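The following NumPy sketch shows how Eqs. (4)-(11) fit together as a forward computation: combining the sentence embeddings of a bag (AVE or ATT), scoring the bag against the class embeddings, and evaluating the Variant-1 loss. It is an illustrative sketch rather than the training code: gradients and parameter updates are omitted, and the function names, toy dimensions, and random inputs are assumptions.

```python
# Illustrative forward computation for Eqs. (4)-(11): sentence combination,
# scoring, and the Variant-1 margin-based ranking loss value.  Names and
# toy sizes are assumptions; no gradient/update machinery is shown.
import numpy as np

def combine_ave(S):                       # Eq. (4): average the bag's sentences
    return S.mean(axis=0)

def combine_att(S, W, c, a=0.5):          # Eqs. (5)-(7): class-aware attention
    e = a * S @ W[c]                      # similarity of each sentence to class c
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                  # softmax over sentences
    return alpha @ S

def score(s, W, c):                       # Eq. (8): F(s, c) = W[c] . s
    return W[c] @ s

def variant1_loss(S, W, pos_labels, rho=2.0, sig_pos=2.5, sig_neg=0.5):
    """Eq. (10) with the AVE combiner: push F(s, c+) above sig_pos and the
    highest-scoring negative class (Eq. (11)) below -sig_neg."""
    s = combine_ave(S)
    all_classes = set(range(W.shape[0]))
    neg = max(all_classes - set(pos_labels), key=lambda c: score(s, W, c))
    loss = sum(rho * max(0.0, sig_pos - score(s, W, c)) for c in pos_labels)
    loss += rho * len(pos_labels) * max(0.0, sig_neg + score(s, W, neg))
    return loss

# toy usage: a bag of 4 sentences, 53 relation classes, 690-dim embeddings
rng = np.random.default_rng(1)
S = rng.normal(size=(4, 690))
W = rng.normal(size=(53, 690))
print(variant1_loss(S, W, pos_labels=[3, 7]))
print(combine_att(S, W, c=3).shape)       # (690,)
```

Variant-2 and Variant-3 differ only in that the bag representation is recomputed with combine_att for the class under consideration and, for Variant-3, the representation s_{c*} is shared across all positive classes when accumulating the loss.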
3.4 Relieving Impact of NR In relation extraction, the dataset will always contain certain negative samples which do not express relations classified as NR (not relation). Table 2 presents the proportion of NR samples in SemEval-2010 Task 8 dataset2 (Erk and Strapparava, 2010) and dataset from Riedel et al. (2010), which shows almost data is about NR in the latter dataset. Data imbalance will severely affect the model training and cause the model only sensitive to classes with high proportion (He and Garcia, 2009). In order to relieve the impact of NR in DS based relation extraction, we cut the propagation of loss from NR, which means if relation c is NR, we set its loss as 0. Our method is similar to Santos et al. (2015) with slight variance. Santos et al. (2015) directly omit the NR class embedding, but we keep it. If we use ATT method to combine information across sentences, we can not omit NR class 2This is a dataset for relation extraction in traditional supervision framework. Algorithm 1: Merging loss function of Variant-3 input : L, (tk, Lk, Xk) and Sk; output: G[Exatt]; 1 G[Exatt] ←0; 2 for c∗∈Lk do 3 Merge representation sc∗by function (5), (6), (7); 4 for c+ ∈Lk do 5 if c+ is not NR then 6 G[Exatt] ←G[Exatt] + ρ[0, σ+ − F(sc∗, c+)]+; 7 c−←argmaxc∈L−Lk F(sc∗, c); 8 G[Exatt] ← G[Exatt] + ρ[0, σ−+ F(sc∗, c−)]+; 9 return G[Exatt]; embedding according to function (6) and (7), on the contrary, it will be updated from the negative classes’ loss. In Algorithm 1, we give out the pseudocodes of merging loss with Variant-3 and considering to relieve the impact of NR. 4 Experiments 4.1 Dataset and Evaluation Criteria We conduct our experiments on a widely used dataset, developed by Riedel et al. (2010) and has been used by Hoffmann et al. (2011), Surdeanu et al. (2012), Zeng et al. (2015) and Lin et al. (2016). The dataset aligns Freebase relation facts with the New York Times corpus, in which training mentions are from 2005-2006 corpus and test mentions from 2007. Following Mintz et al. (2009), we adopt heldout evaluation framework in all experiments. Aggregated precision/recall curves are drawn and precision@N (P@N) is reported to illustrate the model performance. 4.2 Experimental Settings Word Embeddings. We use a word2vec tool that is gensim3 to train word embeddings on NYT corpus. Similar to Lin et al. (2016), we keep the words that appear more than 100 times to construct word dictionary and use “UNK” to represent the other ones. 3http://radimrehurek.com/gensim/models/word2vec.html 1815 Parameter Name Symbol Value Window size dwin 3 Sentence. emb. dim. df 690 Word. emb. dim. d1 50 Position. emb. dim. d2 5 Batch size B 160 Learning rate λ 0.03 Dropout pos. p 0.5 Table 3: Hyper-parameter settings. Hyper-parameter Settings. Three-fold validation on the training dataset is adopted to tune the parameters following Surdeanu et al. (2012). We use grid search to determine the optimal hyperparameters. We select word embedding size from {50, 100, 150, 200, 250, 300}. Batch size is tuned from {80, 160, 320, 640}. We determine learning rate among {0.01, 0.02, 0.03, 0.04}. The window size of convolution is tuned from {1, 3, 5}. We keep other hyper-parameters same as Zeng et al. (2015): the number of kernels is 230, position embedding size is 5 and dropout rate is 0.5. Table 3 shows the detailed parameter settings. 4.3 Comparisons with Baselines Baseline. We compare our model with the following baselines: • Mintz (Mintz et al., 2009) the original distantly supervised model. 
• MultiR (Hoffmann et al., 2011) a multiinstance learning based graphical model which aims to address overlapping relation problem. • MIML (Surdeanu et al., 2012) also solving overlapping relations in a multi-instance multilabel framework. • PCNN+ATT (Lin et al., 2016) the state-ofthe-art model in dataset of Riedel et al. (2010) which applies ATT to combine the sentences. Results and Discussion. We compare our three variants of loss functions with the baselines and the results are shown in Figure 3. From the results we can see that: (1) Rank + AVE (Variant1) achieves comparable results with PCNN+ATT; (2) Rank + ATT (Variant-2) and Rank + ExATT (Variant-3) significantly outperform PCNN + ATT with much higher precision and slightly higher recall in whole view; (3) Rank + ExATT (Variant-3) exhibits the best performances comparing with all the other methods including PCNN + ATT, Rank + AVE and Rank + ATT. Recall 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 Precision 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Mintz MultiR MIML PCNN+ATT Rank+AVE Rank+ATT Rank+ExATT Figure 3: Performance comparison of our model and the baselines. Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+AVE+Sep. Rank+AVE+Joint Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+ATT+Sep. Rank+ATT+Joint Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+ExATT+Sep. Rank+ExATT+Joint Figure 4: Results for impact of joint extraction and class ties with methods of Rank + AVE, Rank + ATT and Rank + ExATT under the setting of relieving impact of NR. 4.4 Impact of Joint Extraction and Class Ties In this section, we conduct experiments to reveal the effectiveness of our model to learn class ties with three variant loss functions mentioned above, and the impact of class ties for relation extraction. As mentioned above, we make joint extraction to learn class ties, so to achieve the goal of this set of experiments, we compare joint extraction with separated extraction. To make separated extraction, we divide the labels of entity tuple into single label and for one relation label we only select the sentences expressing this relation, then we use this dataset to train our model with the three variant loss functions. We conduct experiments with Rank + AVE (Variant-1), Rank + ATT (Variant-2) and Rank + ExATT (Variant3) relieving impact of NR. Aggregated P/R curves are drawn and precisions@N (100, 200, · · · , 500) are reported to show the model performances. 1816 P@N(%) 100 200 300 400 500 Ave. R.+AVE+J. 81.3 76.4 74.6 69.6 66.0 73.6 R.+AVE+S. 82.4 79.6 74.6 74.4 69.9 76.2 R.+ATT+J. 87.9 84.3 78.0 74.9 70.3 79.1 R.+ATT+S. 82.4 79.1 75.9 71.9 69.5 75.7 R.+ExATT+J. 83.5 82.2 78.7 77.2 73.1 79.0 R.+ExATT+S. 82.4 82.7 79.4 74.2 69.2 77.6 Table 4: Precisions for top 100, 200, 300, 400, 500 and average of them for impact of joint extraction and class ties. Recall 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+AVE Rank+ATT Rank+ExATT Figure 5: Results for comparisons of variant joint extractions. Experimental results are shown in Figure 4 and Table 4. From the results we can see that: (1) For Rank + ATT and Rank + ExATT, joint extraction exhibits better performance than separated extraction, which demonstrates class ties will improve relation extraction and the two methods are effective to learn class ties; (2) For Rank + AVE, surprisingly joint extraction does not keep up with separated extraction. 
For the second phenomenon, the explanation may lie in the AVE method to aggregate sentences will incorporate noise data consistent with the finding in Lin et al. (2016). When make joint extraction, we will combine all sentences containing the same entity tuple no matter which class type is expressed, so it will engender much noise if we only combine them equally. 4.5 Comparisons of Variant Joint Extractions To make joint extraction, we have proposed three variant loss functions including Rank + AVE, Rank + ATT and Rank + ExATT in the above discussion and Figure 3 shows that the three variants achieve different performances. In this experiment, we aim to compare the three variants in detail. We conduct the experiments with the three variants under the setting of relieving imP@N(%) 100 200 300 400 500 Ave. R.+AVE 81.3 76.4 74.6 69.6 66.0 73.6 R.+ATT 87.9 84.3 78.0 74.9 70.3 79.1 R.+ExATT 83.5 82.2 78.7 77.2 73.1 79.0 Table 5: Precisions for top 100, 200, 300, 400, 500 and average of them for Rank + AVE, Rank + ATT and Rank + ExATT. Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+AVE+NR Rank+AVE Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+AVE+NR Rank+AVE Recall 0 0.1 0.2 0.3 0.4 Precision 0.4 0.5 0.6 0.7 0.8 0.9 1 Rank+ExATT+NR Rank+ExATT Figure 6: Results for impact of relation NR with methods of Rank + AVE, Rank + ATT and Rank + ExATT. “+NR” means not relieving impact of NR. pact of NR and joint extraction. We draw the P/R curves and report the top N (100, 200, · · · , 500) precisions to compare model performance with the three variants. From the results as shown in Figure 5 and Table 5 we can see that: (1) Comparing Rank + AVE with Rank + ATT, from the whole view, they can achieve the similar maximal recall point, but Rank + ATT exhibits higher precision in all range of recall; (2) Comparing Rank + ATT with Rank + ExATT, Rank + ExATT achieves much better performance with broader range of recall and higher precision in almost range of recall. 4.6 Impact of NR Relation The goal of this experiment is to inspect how much relation of NR can affect the model performance. We use Rank + AVE, Rank + ATT, Rank + ExATT under the setting of relieving impact of NR or not to conduct experiments. We draw the aggregated P/R curves as shown in Figure 6, from which we can see that after relieving the impact of NR, the model performance can be improved significantly. Then we further evaluate the impact of NR for convergence behavior of our model in model train1817 n-epoch 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 F-measure 0.2 0.25 0.3 0.35 0.4 0.45 AVE+NR AVE-NR n-epoch 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 F-measure 0.2 0.25 0.3 0.35 0.4 0.45 ATT+NR ATT-NR n-epoch 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 F-measure 0.2 0.25 0.3 0.35 0.4 0.45 ExATT+NR ExATT-NR Figure 7: Impact of NR for model convergence. “+NR” means not relieving NR impact; “-NR” is opposite. ing. Also with the three variant loss functions, in each iteration, we record the maximal value of Fmeasure 4 to represent the model performance at current epoch. Model parameters are tuned for 15 times and the convergence curves are shown in Figure 7. From the result, we can find out: “+NR” converges quicker than “-NR” and arrives to the final score at the around 11 or 12 epoch. In general, “-NR” converges more smoothly and will achieve better performance than “+NR” in the end. 4.7 Case Study Joint vs. Sep. Extraction (Class Ties). 
We randomly select an entity tuple (Cuyahoga County, Cleveland) from test set to see its scores for every relation class with the method of Rank + ATT under the setting of relieving impact of NR with joint extraction and separated extraction. This entity tuple have two relations: /location/./county seat and /location/./contains, which derive from the same root class and they have weak class ties for they all relating to topic of “location”. We rescale the scores by adding value 10. The results are shown in Figure 8, from which we can see that: under joint extraction setting, the two gold relations have the highest scores among the other relations but under separated extraction setting, only /location/./contains can be distinguished from the negative relations, which demonstrates that joint extraction is better than separated extraction by capturing the class ties between relations. 4F = 2 ∗P ∗R/(P + R) class-id 5 10 15 20 25 30 35 40 45 50 value 6 8 10 12 14 16 18 20 Joint class-id 5 10 15 20 25 30 35 40 45 50 value 6 8 10 12 14 15 Sep. /l./l./contains /l./us./county-seat /l./us./county-seat /l./l./contains Figure 8: The output scores for every relation with method of Rank + ATT. The top is under joint extraction setting; the bottom is under separated extraction. 5 Conclusion and Future Works In this paper, we leverage class ties to enhance relation extraction by joint extraction using pairwise ranking combined with CNN. An effective method is proposed to relieve the impact of NR for model training. Experimental results on a widely used dataset show that leveraging class ties will enhance relation extraction and our model is effective to learn class ties. Our method significantly outperforms the baselines. In the future, we will focus on two aspects: (1) Our method in this paper considers pairwise intersections between labels, so to better exploit class ties, we will extend our method to exploit all other labels’ influences on each relation for relation extraction, transferring second-order to high-order (Zhang and Zhou, 2014); (2) We will focus on other problems by leveraging class ties between labels, specially on multi-label learning problems (Zhou et al., 2012) such as multi-category text categorization (Rousu et al., 2005) and multi-label image categorization (Zha et al., 2008). Acknowledgments Firstly, we would like to thank Xianpei Han and Kang Liu for their valuable suggestions on the initial version of this paper, which have helped a lot to improve the paper. Secondly, we also want to express gratitudes to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work was supported by the National High-tech Research and Development Program (863 Program) (No. 2014AA015105) and National Natural Science Foundation of China (No. 61602490). 1818 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of KDD. pages 1247–1250. Katrin Erk and Carlo Strapparava, editors. 2010. Proceedings of SemEval. The Association for Computer Linguistics. Johannes F¨urnkranz, Eyke H¨ullermeier, Eneldo Loza Menc´ıa, and Klaus Brinker. 2008. Multilabel classification via calibrated label ranking. Machine learning 73(2):133–153. 
Xianpei Han and Le Sun. 2016. Global distant supervision for relation extraction. In Proceedings of AAAI. pages 2950–2956. Haibo He and Edwardo A. Garcia. 2009. Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 21(9):1263–1284. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of ACLHLT. Association for Computational Linguistics, pages 541–550. Nathalie Japkowicz and Shaju Stephen. 2002. The class imbalance problem: A systematic study. Intelligent data analysis 6(5):429–449. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521(7553):436–444. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL. volume 1, pages 2124–2133. Tie-Yan Liu. 2009. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3(3):225–331. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. pages 1412–1421. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL-IJCNLP. Association for Computational Linguistics, pages 1003–1011. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD. Springer, pages 148–163. Juho Rousu, Craig Saunders, Sandor Szedmak, and John Shawe-Taylor. 2005. Learning hierarchical multi-category text classification models. In Proceeding of ICML. ACM, pages 744–751. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceeding of ACL. pages 626–634. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 373–382. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of EMNLP. Association for Computational Linguistics, pages 455–465. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of ACL, Volume 1: Long Papers. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. WSABIE: scaling up to large vocabulary image annotation. In Proceedings of IJCAI. pages 2764–2770. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP. pages 17–21. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In Proceeding of COLING. pages 2335–2344. Zheng-Jun Zha, Xian-Sheng Hua, Tao Mei, Jingdong Wang, Guo-Jun Qi, and Zengfu Wang. 2008. Joint multi-label multi-instance learning for image classification. In CVPR. IEEE, pages 1–8. Min-Ling Zhang and Zhi-Hua Zhou. 2006. Multilabel neural networks with applications to functional genomics and text categorization. IEEE transactions on Knowledge and Data Engineering 18(10):1338– 1351. Min-Ling Zhang and Zhi-Hua Zhou. 2014. 
A review on multi-label learning algorithms. IEEE transactions on knowledge and data engineering 26(8):1819–1837. Fang Zhao, Yongzhen Huang, Liang Wang, and Tieniu Tan. 2015. Deep semantic ranking based hashing for multi-label image retrieval. In Proceedings of CVPR. pages 1556–1564. 1819 Hao Zheng, Zhoujun Li, Senzhang Wang, Zhao Yan, and Jianshe Zhou. 2016. Aggregating inter-sentence information to enhance relation extraction. In Thirtieth AAAI Conference on Artificial Intelligence. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proceeding of ACL. page 207. Zhi-Hua Zhou, Min-Ling Zhang, Sheng-Jun Huang, and Yu-Feng Li. 2012. Multi-instance multi-label learning. Artificial Intelligence 176(1):2291–2320. 1820
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1821–1831 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1167 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1821–1831 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1167 Search-based Neural Structured Learning for Sequential Question Answering Mohit Iyyer∗ Department of Computer Science and UMIACS University of Maryland, College Park [email protected] Wen-tau Yih, Ming-Wei Chang Microsoft Research Redmond, WA 98052 {scottyih,minchang}@microsoft.com Abstract Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semistructured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions. 1 Introduction Semantic parsing, which maps natural language text to meaning representations in formal logic, has emerged as a key technical component for building question answering systems (Liang, 2016). Once a natural language question has been mapped to a formal query, its answer can be retrieved by executing the query on a back-end structured database. One of the main focuses of semantic parsing research is how to address compositionality in language, and complicated questions have been specifically targeted in the design of a recently-released QA dataset (Pasupat and Liang, 2015). Take for example the following question: “of those actresses who won a Tony after 1960, which one took the most amount of years after winning the Tony to ∗Work done during an internship at Microsoft Research win an Oscar?” The corresponding logical form is highly compositional; in order to answer it, many sub-questions must be implicitly answered in the process (e.g., “who won a Tony after 1960?”). While we agree that semantic parsers should be able to answer very complicated questions, in reality these questions are rarely issued by users.1 Because users can interact with a QA system repeatedly, there is no need to assume a single-turn QA setting where the exact question intent has to be captured with just one complex question. The same intent can be more naturally expressed through a sequence of simpler questions, as shown below: 1. What actresses won a Tony after 1960? 2. Of those, who later won an Oscar? 3. Who had the biggest gap between their two award wins? Decomposing complicated intents into multiple related but simpler questions is arguably a more effective strategy to explore a topic of interest, and it reduces the cognitive burden on both the person who asks the question and the one who answers it.2 In this work, we study semantic parsing for answering sequences of simple related questions. 
We collect a dataset of question sequences called SequentialQA (SQA; Section 2)3 by asking crowdsourced workers to decompose complicated questions sampled from the WikiTableQuestions dataset (Pasupat and Liang, 2015) into multiple easier ones. SQA, which contains 6,066 question sequences with 17,553 total question-answer pairs, is to the best of our knowledge the first semantic parsing dataset for sequential question answering. Section 3 describes our novel dynamic neural semantic parsing framework (DynSP), a weakly supervised structured-output learning approach based on reward-guided search that is designed for solving sequential QA. We demonstrate in Section 4 that DynSP achieves higher accuracies than existing systems on SQA, and we offer a qualitative analysis of question types that our method answers effectively, as well as those on which it struggles.

1For instance, only 3.75% of questions in WikiAnswers have more than 15 words (Fader et al., 2014).
2Studies have shown increased sentence complexity links to longer reading times (Hale, 2006; Levy, 2008; Frank, 2013).
3Available at http://aka.ms/sqa

Figure 1: An example question sequence created from a compositional question intent. Workers must write questions whose answers are subsets of cells in the table. (Table shown: “Legion of Super Heroes Post-Infinite Crisis”. Original intent: What super hero from Earth appeared most recently? Decomposed sequence: 1. Who are all of the super heroes? 2. Which of them come from Earth? 3. Of those, who appeared most recently?)

2 A Dataset of Question Sequences

We collect the SequentialQA (SQA) dataset via crowdsourcing by leveraging WikiTableQuestions (Pasupat and Liang, 2015, henceforth WTQ), which contains highly compositional questions associated with HTML tables from Wikipedia. Each crowdsourcing task contains a long, complex question originally from WTQ as the question intent. The workers are asked to compose a sequence of simpler questions that lead to the final intent; an example of this process is shown in Figure 1.

To simplify the task for workers, we only use questions from WTQ whose answers are cells in the table, which excludes those involving arithmetic and counting. We likewise restrict the questions our workers can write to those answerable by table cells alone. These restrictions speed the annotation process because workers can just click on the table to answer their question. They also allow us to collect answer coordinates (row and column in the table) as opposed to answer text, which removes many normalization issues for answer string matching in evaluation. Finally, we only use long questions that contain nine or more words as intents; shorter questions tend to be simpler and are thus less amenable to decomposition.

2.1 Properties of SQA

In total, we used 2,022 question intents from the train and test folds of WTQ for decomposition. We had three workers decompose each intent, resulting in 6,066 unique question sequences containing 17,553 total question-answer pairs (an average of 2.9 questions per sequence). We divide the dataset into train and test using the original WTQ folds, resulting in an 83/17 train/test split. Importantly, just like in WTQ, none of the tables in the test set are seen in the training set.
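Because answers are stored as cell coordinates rather than answer strings, a single collected example can be represented very simply. The sketch below is a hypothetical illustration of such a structure; the field names (table_id, answer_coordinates, and so on) and the coordinate values are our own assumptions, not the released schema.

```python
# A minimal sketch of how one SQA example could be represented in code.
# Field names and coordinate values here are illustrative assumptions only.
example_sequence = {
    "table_id": "legion_of_super_heroes",      # hypothetical identifier
    "original_intent": "What super hero from Earth appeared most recently?",
    "questions": [
        {"position": 1,
         "text": "Who are all of the super heroes?",
         "answer_coordinates": [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]},
        {"position": 2,
         "text": "Which of them come from Earth?",
         "answer_coordinates": [(1, 0), (3, 0)]},
        {"position": 3,
         "text": "Of those, who appeared most recently?",
         "answer_coordinates": [(3, 0)]},
    ],
}

# Evaluation then reduces to exact-set comparison of coordinates,
# sidestepping answer-string normalization issues.
def is_correct(predicted, gold):
    return set(predicted) == set(gold)
```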
We identify three frequently-occurring question classes: column selection, subset selection, and row selection.4 In column selection questions, the answer is an entire column of the table; these questions account for 23% of all questions in SQA. Subset and row selection are more complicated than column selection, as they usually contain coreferences to the previous question’s answer. In subset selections, the answer is a subset of the previous question’s answer; similarly, the answers to row selections occur in the same row(s) as the previous answer but in a different column. Subset selections make up 27% of SQA, while row selections are an additional 19%. The remaining 31% contains more complex combinations of these three types.

We also observe dramatic differences in the types of questions that are asked at each position of the sequence. For example, 51% of the first questions in the sequences are column selections (e.g., “what are all of the teams?”). This number dwindles to just 18% when we look at the second question of each sequence, which indicates that the collected sequences start with general questions and progress to more specific ones.

4In the example sequence “what are all of the tournaments? in which one did he score the least points? on what date was that?”, the first question is a column selection, the second is a subset selection, and the last one is a row selection.

3 Dynamic Neural Semantic Parsing

The unique setting of SQA provides both opportunities and challenges. On the one hand, it contains short questions with less compositionality, which in theory should reduce the difficulty of the semantic parsing problem; on the other hand, the additional contextual dependencies of the preceding questions and their answers increase modeling complexity. These observations lead us to propose a dynamic neural semantic parsing framework (DynSP), trained using a reward-guided search procedure, for solving SQA.

Given a question (optionally along with previous questions and answers) and a table, DynSP formulates the semantic parsing problem as a state–action search problem. Each state represents a complete or partial parse, while each action corresponds to an operation to extend a parse. The goal during inference is to find an end state with the highest score as the predicted parse. The quality of the induced semantic parse obviously depends on the scoring function. In our design, the score of a state is determined by the scores of the actions taken from the initial state to the target state, which are predicted by different neural network modules based on action type. By leveraging a margin-based objective function, the model learning procedure resembles several structured-output learning algorithms such as structured SVMs (Tsochantaridis et al., 2005), but can take either strong or weak supervision seamlessly.

DynSP is inspired by STAGG, a search-based semantic parser (Yih et al., 2015), as well as the dynamic neural module network (DNMN) of Andreas et al. (2016). Much like STAGG, DynSP chains together different modules as search progresses; however, these modules are implemented as neural networks, which enables end-to-end training as in DNMN. The key difference between DynSP and DNMN is that in DynSP the network structure of an example is not predetermined. Instead, different network structures are constructed dynamically as our learning procedure explores the state space.
It is straightforward to answer sequential questions using our framework: we allow the model to take the previous question and its answers as input, with a slightly modified action space to reflect a dependent semantic parse. The same search and learning procedure is then able to adapt effortlessly to the new setting. In this section, we first describe the formal language underlying DynSP, followed by the model formulation and learning algorithm.

3.1 Semantic parse language

Because tables are used as the data source to answer questions in SQA, we choose to form our semantic parses in an SQL-like language.5 Our parses consist of two parts: a select statement and a conjunction of zero or more conditions. A select statement is associated with a column name, which is referred to as the answer column. Conditions enforce additional constraints on which cells in the answer column can be chosen; a select statement without any conditions indicates that an entire column of the table is the answer to the question. In particular, each condition contains a column name as the condition column and an operator with zero or more arguments. The operators in this work include: =, ≠, >, ≥, <, ≤, arg min, arg max. A cell in the answer column is only a legitimate answer if the cell of the corresponding row in the condition column satisfies the constraint defined by the operator and its arguments.

5Our framework is not restricted to the formal language we use in this work. In addition, the structured query can be straightforwardly represented in other formal languages, such as the lambda DCS logic used in (Pasupat and Liang, 2015).

As a concrete example, suppose the data source is the same table in Fig. 1. The semantic parse of the question “Which super heroes came from Earth and first appeared after 2009?” is “Select Character Where {Home World = Earth} ∧ {First Appeared > 2009}” and the answers are {Dragonwing, Harmonia}.

In order to handle the sequential aspect of SQA, we extend the semantic parse language by adding a preamble statement, subsequent. A subsequent statement contains only conditions, as it essentially adds constraints to the semantic parse of the previous question. For instance, if the follow-up question is “Which of them breathes fire?”, then the corresponding semantic parse is “Subsequent Where {Powers = Fire breath}”. The answer to this question is {Dragonwing}, a subset of the previous answer.

3.2 Model formulation

We introduce our model design by first defining the state and action space. Let S be the set of states and A the set of all actions. A state s ∈ S is simply a variable-length sequence of actions {a1, a2, a3, · · · , at}, where ai ∈ A. An empty sequence, s0 = φ, is a special state used as the starting point of search. As mentioned earlier, a state represents a (partial) semantic parse of one question. Each action is thus a legitimate operation that can be added to grow the semantic parse. Our action space design is tied closely to the statements defined by our parse language; in particular, an action instance is either a complete or partial statement, and action instances are grouped by type. For example, select and subsequent operations are two action types.
A condition statement is formed by two different action types: (1) selection of the condition column, and (2) the comparison operator. The instances of each action type differ in their arguments (e.g., column names, or specific cells in a column). Because conditions in a subsequent parse rely on previous questions and answers, they belong to different action types from regular conditions. Table 1 summarizes the action space defined in this work.

Table 1: Types of actions and the number of action instances in each type. Numbers / datetimes are the mentions discovered in the question (plus the previous question if it is a subsequent condition).
Id    Type                  # Action instances
A1    Select-column         # columns
A2    Cond-column           # columns
A3    Op-Equal (=)          # rows
A4    Op-NotEqual (≠)       # rows
A5    Op-GT (>)             # numbers / datetimes
A6    Op-GE (≥)             # numbers / datetimes
A7    Op-LT (<)             # numbers / datetimes
A8    Op-LE (≤)             # numbers / datetimes
A9    Op-ArgMin             # numbers / datetimes
A10   Op-ArgMax             # numbers / datetimes
A11   Subsequent            1
A12   S-Cond-column         # columns
A13   S-Op-Equal (=)        # rows
A14   S-Op-NotEqual (≠)     # rows
A15   S-Op-GT (>)           # numbers / datetimes
A16   S-Op-GE (≥)           # numbers / datetimes
A17   S-Op-LT (<)           # numbers / datetimes
A18   S-Op-LE (≤)           # numbers / datetimes
A19   S-Op-ArgMin           # numbers / datetimes
A20   S-Op-ArgMax           # numbers / datetimes

Any state that represents a complete and legitimate parse is an end state. Notice that search does not necessarily need to stop at an end state, because adding more actions (e.g., condition statements) can lead to another end state. Take the same example question from before: “Which super heroes came from Earth and first appeared after 2009?”. One action sequence that represents the parse is {(A1) select-column Character, (A2) cond-column Home World, (A3) op-equal Earth, (A2) cond-column First Appeared, (A5) op-gt 2009}.

Notice that many states represent semantically equivalent parses (e.g., those with the same actions ordered differently, or states with repeated conditions). To prune the search space, we introduce the function Act(s) ⊂ A, which defines the actions that can be taken when given a state s. Borrowing the idea of staged state generation in (Yih et al., 2015), we choose a default ordering of actions based on their types, dictating that a select action must be picked first and that a condition column needs to be determined before the operator is chosen. The full transition diagram is presented in Fig. 2. Note that to implement this transition order, we only need to check the last action in the state. In addition, we also disallow adding duplicates of actions that already exist in the state.

Figure 2: Possible action transitions based on their types (see Table 1). Shaded circles are end states.

We use beam search to find an end state with the highest score for inference. Let st be a state consisting of a sequence of actions a1, a2, · · · , at. The state value function V is defined recursively as V(st) = V(st−1) + π(st−1, at), with V(s0) = 0, where the policy function π(s, a) scores an action a ∈ Act(s) given the current state.

3.3 Policy function

The intuition behind the policy function can be summarized as follows. Halfway through the construction of a semantic parse, the policy function measures the quality of an immediate action that can be taken next given the current state (i.e., the question and actions that have previously been chosen).
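Before detailing the neural parameterization, the following minimal Python sketch illustrates how states, Act(s), the policy score π, and beam search fit together. It is an illustration under our own simplifying assumptions: a toy action inventory and a stand-in scoring function replace the staged transitions of Figure 2 and the neural modules described next, so it is not the actual DynSP implementation.

```python
# Simplified action inventory: (action_type, argument). A state is a tuple of actions.
def legal_actions(state, columns):
    """A toy stand-in for Act(s): pick a select column first, then zero or more
    (condition column, operator) pairs."""
    if not state:                                   # empty state s0
        return [("select-column", c) for c in columns]
    last_type = state[-1][0]
    if last_type in ("select-column", "op-equal"):  # may add (another) condition
        return [("cond-column", c) for c in columns]
    if last_type == "cond-column":                  # operator must follow its column
        return [("op-equal", v) for v in ("Earth", "2009")]
    return []

def is_end_state(state):
    # An end state is a complete parse: here, anything not waiting for an operator.
    return bool(state) and state[-1][0] != "cond-column"

def beam_search(policy, columns, beam_size=3, max_len=5):
    """Return the highest-scoring end state under V(s_t) = V(s_{t-1}) + pi(s_{t-1}, a_t)."""
    beam = [((), 0.0)]                               # (state, V(state)), V(s0) = 0
    best = None
    for _ in range(max_len):
        candidates = []
        for state, value in beam:
            for action in legal_actions(state, columns):
                if action in state:                  # disallow duplicate actions
                    continue
                candidates.append((state + (action,), value + policy(state, action)))
        if not candidates:
            break
        beam = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
        for state, value in beam:
            if is_end_state(state) and (best is None or value > best[1]):
                best = (state, value)
    return best

# A dummy policy that slightly prefers the "Character" column, for demonstration only.
toy_policy = lambda s, a: 1.0 if a == ("select-column", "Character") else 0.5
print(beam_search(toy_policy, ["Character", "Home World", "First Appeared"]))
```

In the real model, legal_actions corresponds to the staged transitions of Figure 2 over the full action space of Table 1, and the stand-in policy is replaced by the neural modules introduced below.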
To enable integrated, end-to-end learning, the policy function in our framework is parameterized using neural networks. Because each action type has very different semantics, we design different network structures (i.e., modules) accordingly. Most of our network structures encourage learning semantic matching functions between the words in the question and table (either the column names or cells). Here we illustrate the design using the select-column action type (A1). Conceptually, the corresponding module is a combination of various matching scores. Let WQ be the embeddings of words in the question and WC be the embeddings of words in the target column name. The component matching functions are:

$f_{\max} = \frac{1}{|W_C|} \sum_{w_c \in W_C} \max_{w_q \in W_Q} w_q^{\top} w_c$

$f_{\mathrm{avg}} = \Big( \frac{1}{|W_C|} \sum_{w_c \in W_C} w_c \Big)^{\top} \Big( \frac{1}{|W_Q|} \sum_{w_q \in W_Q} w_q \Big)$

Essentially, for each word in the column name, fmax finds the highest matching question word and outputs the average score. Conversely, favg simply uses the average word vectors of the question and column name and returns their inner product. In another variant of favg, we replace the question representation with the output of a bi-directional LSTM model. These matching component functions are combined by a 2-layer feed-forward neural network, which outputs a scalar value as the action score. Details of the neural module design for other action types can be found in Appendix A.

3.4 Model learning

Because the state value function V is defined recursively as the sum of scores of actions in the sequence, the goal of model optimization is to learn the parameters in the neural networks behind the policy function. Let θ be the collection of all the model parameters. Then the state value function can be written as $V_\theta(s_t) = \sum_{i=1}^{t} \pi_\theta(s_{i-1}, a_i)$.

In a fully supervised setting where the correct semantic parse of each question is available, learning the policy function can be reduced to a sequence prediction problem. However, while having full supervision leads to a better semantic parser, collecting the correct parses requires a much more sophisticated UI design (Yih et al., 2016). In many scenarios, such as the one in the SQA dataset, it is often the case that only the answers to the questions are available. Adapting a learning algorithm to this weakly supervised setting is thus critical.

Generally speaking, weakly supervised semantic parsers operate on one assumption: a candidate semantic parse is treated as a correct one if it results in answers that are identical to the gold answers. Therefore, a straightforward modification of existing structured learning algorithms in our setting is to use any semantic parse found to evaluate to the correct answers during beam search as a reference parse, and then update the model parameters accordingly. In practice, however, this approach is often problematic: the search space can grow enormously, and when coupled with poor model performance early during training, this leads to beams that contain no parses evaluating to the correct answer. As a result, learning becomes inefficient and takes a long time to converge.

In this work, we propose a conceptually simple learning algorithm for weakly supervised training that sidesteps the inefficient learning problem. Our key insight is to conduct inference using a beam search procedure guided by an approximate reward function.
The search procedure is executed twice for each training example, once for finding the best possible reference semantic parse and once for finding the predicted semantic parse used to update the model. Our framework is suitable for learning from either implicit or explicit supervision, and is detailed in a companion paper (Peng et al., 2017). Below we describe how we adapt it to the semantic parsing problem in this work.

Approximate reward. Let A(s) be the answers retrieved by executing the semantic parse represented by state s, and let A∗ be the set of gold answers of a given question. We define the reward R(s; A∗) = 1[A(s) = A∗], or the accuracy of the retrieved answers. We use R(s) as the abbreviation for R(s; A∗). A state s with R(s) = 1 is called a goal state. Directly using this reward function in search of goal states can be difficult, as the rewards of most states are 0. However, even when the answers from a semantic parse are not completely correct, some overlap with the gold answers can still hint that the state is close to a goal state, thus providing useful information to guide search. To formalize this idea, we define an approximated reward ˜R(s) in this work using the Jaccard coefficient: ˜R(s) = |A(s) ∩ A∗| / |A(s) ∪ A∗|. If s is a goal state, then obviously ˜R(s) = R(s) = 1. Also, because our actions effectively add additional constraints to exclude some table cells, any succeeding state of a state s′ with ˜R(s′) = 0 will also have 0 approximate reward and can be pruned from search immediately.

We use the approximate reward ˜R to guide our beam search to find the reference parses (i.e., goal states). Some variations of the approximate reward can be used to make learning more efficient. For instance, we use the model score for tie-breaking, effectively making the approximate reward function depend on the model parameters:

˜Rθ(s) = |A(s) ∩ A∗| / |A(s) ∪ A∗| + ϵVθ(s), (1)

where ϵ is a small constant. When a goal state is not found, the state with the highest approximate reward can still be used as a surrogate reference.

Updating parameters. The model parameters are updated by first finding the most violated state ˆs and then comparing ˆs with a reference state s∗ to compute a loss. The idea of finding the most violated state comes from Taskar et al. (2004), with the intuition that the learning algorithm should make the state value function behave similarly to the reward. Formally, for every state s, we would like the value function to satisfy the following constraint:

Vθ(s∗) − Vθ(s) ≥ R(s∗) − R(s) (2)

R(s∗) − R(s) is thus the margin. As discussed above, we use the approximate reward function ˜Rθ instead of the true reward. We want to update the model parameters θ to make sure that the constraint is satisfied. When the constraint is violated, the degree of violation can be written as:

L(s) = Vθ(s) − Vθ(s∗) − ˜Rθ(s) + ˜Rθ(s∗) (3)

In the algorithm, we want to find the state whose corresponding constraint is most violated. Finding the most violated state is then equivalent to finding the state with the highest value of Vθ(s) − ˜Rθ(s), as the other two terms are constant.

Algorithm 1 Model parameter updates
1: for each labeled example (x, A∗) do
2:   s∗ ← arg max_{s∈E(x)} ˜R(s; A∗)
3:   ˆs ← arg max_{s∈E(x)} Vθ(s) − ˜R(s; A∗)
4:   update θ by minimizing max(L(ˆs), 0)
5: end for

Algorithm 1 sketches the key steps of our method in each iteration. It first picks a training instance (x and y), where x represents the table and the question, and y is the gold answer set (see the sketch below).
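To make the update step concrete, here is a minimal Python sketch of one iteration of Algorithm 1. It is written under our own simplifying assumptions: a generic list of end states and abstract callables value_fn and execute_fn stand in for the beam search, the neural modules, and the parse executor, so it illustrates the approximate reward and margin loss rather than reproducing the DynSP implementation.

```python
def approx_reward(pred_answers, gold_answers, state_value=0.0, eps=0.0):
    """Jaccard overlap with the gold answers, plus eps * V_theta(s) for
    tie-breaking as in Eq. (1)."""
    a, g = set(pred_answers), set(gold_answers)
    jaccard = len(a & g) / len(a | g) if (a | g) else 0.0
    return jaccard + eps * state_value

def one_update(end_states, gold_answers, value_fn, execute_fn, eps=1e-3):
    """One iteration of Algorithm 1 on a single training example.

    end_states:  candidate end states (produced by beam search in practice).
    value_fn:    maps a state to its model score V_theta(s).
    execute_fn:  executes the parse represented by a state, returning answers A(s).
    Returns the hinge loss max(L(s_hat), 0) that would be backpropagated
    through the neural modules behind value_fn.
    """
    r = {s: approx_reward(execute_fn(s), gold_answers, value_fn(s), eps)
         for s in end_states}
    s_ref = max(end_states, key=lambda s: r[s])                    # line 2: reference
    s_hat = max(end_states, key=lambda s: value_fn(s) - r[s])      # line 3: most violated
    loss = (value_fn(s_hat) - value_fn(s_ref)) - (r[s_hat] - r[s_ref])   # Eq. (3)
    return max(loss, 0.0)                                          # line 4

# Toy usage with two hypothetical parses over the super-hero table.
states = ["parse_A", "parse_B"]
answers = {"parse_A": {"Dragonwing", "Harmonia"}, "parse_B": {"Dragonwing"}}
print(one_update(states, {"Dragonwing"},
                 value_fn=lambda s: 0.3 if s == "parse_A" else 0.1,
                 execute_fn=lambda s: answers[s]))
```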
The approximate reward function ˜R is defined by y, while E(x) is the set of end states for this instance. Line 2 finds the best reference and Line 3 finds the most violated state, both relying on beam search for approximate inference. Line 4 computes the gradient of the loss in Eq. (3), which is then used in backpropagation to update the model parameters. 4 Experiments Since the questions in SQA are decomposed from those in WTQ, we compare our method, DynSP, to two existing semantic parsers designed for WTQ: (1) the floating parser (FP) of Pasupat and Liang (2015), and (2) the neural programmer (NP) of Neelakantan et al. (2017). We describe below each system’s configurations in more detail and qualitatively compare and contrast their performance on SQA. Floating parser: The floating parser (Pasupat and Liang, 2015) maps questions to logical forms and then executes them on the table to retrieve the answers. It was designed specifically for the WTQ task (achieving 37.0% accuracy on the WTQ test set) and differs from other semantic parsers by not anchoring predicates to tokens in the question, relying instead on typing constraints to reduce the search space. Using FP as-is results in poor performance on SQA because the system is configured for questions with single answers, while SQA contains many questions with multiple-cell answers. We address this issue by removing a pruning hyperparameter (tooManyValues) and features that add bias on the denotation size. Neural programmer: The neural programmer proposed by Neelakantan et al. (2017) has shown promising results on WTQ, achieving accuracies on par with those of FP. Similar to our method, NP contains specialized neural modules that perform discrete operations such as argmax and argmin, and it is able to chain together multiple modules to answer a single question. However, module selection in NP is computed via soft attention (Cho et al., 2014), and information is propagated from one module to the next using a recurrent neural network. Since module selection is not tied to a pre-defined parse language like DynSP, NP simply runs for a fixed number of recurrent timesteps per question rather than growing a parse until it is complete. Comparing the baseline systems: FP and NP exemplify two very different paradigms for designing a semantic parsing system to answer questions using structured data. FP is a feature-rich system that aims to output the correct semantic parse (in a logical parse language) for a given question. On the other hand, the end-to-end neural network of NP relies on its modular architectures to output a probability distribution over cells in a table given a question. While NP can learn more powerful neural matching functions between questions and tables than FP’s simpler feature-based matching, NP cannot produce a complete, discrete semantic parse, which means that its actions can only be interpreted coarsely by looking at the order of the modules selected at each timestep.6 Furthermore, FP’s design theoretically allows it to operate on partial tables 6Since NP uses a fixed number of timesteps for each question, the module order is not guaranteed to correspond to a complete parse. 
1826 indirectly through an API, which is necessary if tables are large and stored in a backend database, while NP requires upfront access to the full tables to facilitate end-to-end model differentiability.7 Even though FP and NP are powerful systems designed for the more difficult, compositional questions in WTQ, our method outperforms both systems on SQA when we consider all questions within a sequence independently of each other (a fair comparison), demonstrating the power of our search-based semantic parsing framework. More interestingly, when we leverage the sequential information by including the subsequent action, our method improves almost 3% in absolute accuracy. DynSP combines the best parts of both FP and NP. Given a question, we try to generate its correct semantic parse in a formal language that can be predefined by the choice of structured data source (e.g., SQL). However, we push the burden of feature engineering to neural networks as in NP. Our framework is easier to extend to the sequential setting of SQA than either baseline system, requiring just the additional subsequent action. FP’s reliance on a hand-designed grammar necessitates extra rules that operate over partial tables from the previous question, which if added would blow up the search space. Meanwhile, modifying NP to handle sequential QA is non-trivial due to soft module and answer selection; it is not immediately clear how to constrain predictions for one question based on the probability distribution over table cells from the previous question in the sequence. To more fairly compare DynSP to the baseline systems, we also experiment with a “concatenated questions” setting, which allows the baselines to access sequential context. Here, we treat concatenated question prefixes of a sequence as additional training examples, where a question prefix includes all questions prior to the current question in the sequence. For example, suppose the question sequence is: 1. what are all of the teams? 2. of those, which won championships? For the second question, in addition to the original question–answer pair, we add the concatenated question sequence “what are all of the teams? of those, which won championships?” paired with the second question’s answer. We refer to these concatenated question baselines as FP+ and NP+. 7In fact, NP is restricted during training to only questions whose associated tables have fewer than a certain threshold of rows and columns due to computational constraints. 4.1 DynSP implementation details Unlike previous dynamic neural network frameworks (Andreas et al., 2016; Looks et al., 2017), where each example can have different but predetermined structure, DynSP needs to dynamically explores and constructs different neural network structures for each question. Therefore, we choose DyNet (Neubig et al., 2017) as our implementation platform for its flexibility in composing computation graphs. We optimize our model parameters using standard stochastic gradient descent. The word embeddings are initialized with 100-d pretrained GloVe vectors (Pennington et al., 2014) and fine-tuned during training with dropout rate 0.5. For follow-up questions, we choose uniformly at random to use either gold answers to the previous question or the model’s previous predictions.8 We constrain the maximum length of actions to 3 for computational efficiency and set the beam size to 15 in our reported models, as accuracy gains are negligible with larger beam sizes. 
We train our model for 30 epochs, although the best model on the validation set is usually found within the first 20 epochs. Only CPU is used in model training, and each epoch in the beam size 15 setting takes about 30 minutes to complete. 4.2 Results & Analysis Table 2 shows the results of the baseline systems as well as our method on SQA’s test set. For each system, we show both the overall accuracy, the sequence accuracy (the percentage of sequences for which every question was answered correctly), and the accuracy at each position in the sequence. Our method without any sequential information (DynSP) outperforms the standard baselines, and when the subsequent action is added (DynSP∗), we improve both overall and sequence accuracy over the concatenated-question baselines. With that said, all of the systems struggle to answer all questions within a sequence correctly, despite the fact that each individual question is simpler on average than those in WTQ. Most of the errors made by our system are due to either semantic matching challenges or limitations of the underlying parse language. In the middle example of Figure 3, the first question asks for a list of super heroes; from the model’s point of view, Real name is a more relevant column than Character, although the latter is correct. The second question also con8Only predicted answers are used at test time. 1827 Model All Seq Pos 1 Pos 2 Pos 3 FP 34.1 7.2 52.6 25.6 25.9 NP 39.4 10.8 58.9 35.9 24.6 DynSP 42.0 10.2 70.9 35.8 20.1 FP+ 33.2 7.7 51.4 22.2 22.3 NP+ 40.2 11.8 60.0 35.9 25.5 DynSP∗ 44.7 12.8 70.4 41.1 23.6 Table 2: Accuracies of all systems on SQA; the models in the first half of the table treat questions independently, while those in the second half consider sequential context. Our method outperforms existing ones both in terms of overall accuracy as well as sequence accuracy. tains a challenging matching problem where the unlisted home worlds referred to in the question are marked as Unknown in the table. Many of these matching issues are resolved by humans using common sense, which for computers requires far more data than is available in SQA to learn. Even when there are no tricky discrepancies between question and table text, questions are often complex enough that their semantic parses cannot be expressed in our parse language. Although trivial on the surface, the final question in the bottom sequence of Figure 3 is one such example; the correct semantic parse requires access to the answers of both the first and second question, actions that we have not currently implemented in our language due to concerns with the search space size. Increasing the number of complex actions requires designing smarter optimization procedures, which we leave to future work. 5 Related Work Previous work on conversational QA has focused on small, single-domain datasets. Perhaps most related to our task is the context-dependent sentence analysis described in (Zettlemoyer and Collins, 2009), where conversations between customers and travel agents are mapped to logical forms after resolving referential expressions. Another dataset of travel booking conversations is used by Artzi and Zettlemoyer (2011) to learn a semantic parser for complicated queries given user clarifications. More recently, Long et al. (2016) collect three contextual semantic parsing datasets (from synthetic domains) that contain coreferences to entities and 1. Which nations competed in the FINA women’s water polo cup? 2. 
Of these nations, which ones took home at least one gold medal? 3. Of those, which ranked in the top 2 positions? SELECT Nation SUBSEQUENT WHERE Gold != 0 SUBSEQUENT WHERE Rank <= 2 1. Who are all of the super heroes? 2. Which of those does not have a home world listed? SELECT SUBSEQUENT WHERE != Character Real name Home world Unknown Vyrga 1. How many naturalizations did Maghreb have in 2000? 2. How many naturalizations did North America have in 2000? 3. Which had more? SELECT 2000 SUBSEQUENT WHERE …Origin = North America WHERE = …Origin Maghreb SELECT 2000 WHERE = …Origin North America MAX SUBSEQUENT 1 SUBSEQUENT 2 SELECT …Origin WHERE 2000 = Figure 3: Parses computed by DynSP for three test sequences (actions in blue boxes, values from table in white boxes). Top: all three questions are parsed correctly. Middle: semantic matching errors cause the model to select incorrect columns and conditions. Bottom: The final question is unanswerable due to limitations of our parse language. actions. We differentiate ourselves from these prior works in two significant ways: first, our dataset is not restricted to a particular domain, and second, a major goal of our work is to analyze the different types of sequence progressions people create when they are trying to express a complicated intent. Complex, interactive QA tasks have also been proposed in the information retrieval community, where the data source is a corpus of newswire text (Kelly and Lin, 2007). We also build on aspects of some existing interactive question-answering systems. For example, the system of Harabagiu et al. (2005) includes a module that predicts what a user will ask next given their current question. Other than FP and NP, the work of Neural Symbolic Machines (NSM) (Liang et al., 2017) is perhaps the closest to ours. NSM aims to generate formal semantic parses of questions that can be executed on Freebase to retrieve answers, and is trained using the REINFORCE algorithm (Williams, 1992) augmented with approximate gold parses found in a separate curriculum learning stage. In comparison, finding reference parses is an integral part of our algorithm. Our non1828 probabilistic, margin-based objective function also helps avoid the need for empirical tricks to handle normalization and proper sampling, which are crucial when applying REINFORCE in practice. 6 Conclusion & Future Work In this work we move towards a conversational, multi-turn QA scenario in which systems must rely on prior context to answer the user’s current question. To this end, we introduce SQA, a dataset that consists of 6,066 unique sequences of inter-related questions about Wikipedia tables, with 17,553 questions-answer pairs in total. To the best of our knowledge, SQA is the first semantic parsing dataset that addresses sequential question answering. We propose DynSP, a dynamic neural semantic parsing framework, for solving SQA. By formulating semantic parsing as a state–action search problem, our method learns modular neural network models through reward-guided search. DynSP outperforms existing state-of-the-art systems designed for answering complex questions when applied to SQA, and increases the gain after incorporating the subsequent actions. In the future, we plan to investigate several interesting research questions triggered by this work. 
For instance, although our current formal language design covers most question types in SQA, it is nevertheless important to extend it further to make the semantic parser more robust (e.g., by including UNION or allowing comparison of multiple previous answers). Practically, allowing a more complicated semantic parse structure—either by increasing the number of primitive statements or the length of the parse—poses serious computational challenges in both model learning and inference. Because of the dynamic nature of our framework, it is not trivial to leverage the computational capabilities of GPUs using minibatched training; we plan to investigate ways to take full advantage of modern computing machinery in the near future. Finally, better resolution of semantic matching errors is a top priority, and unsupervised learning from large external corpora is one way to make progress in this direction. Acknowledgments We thank the anonymous reviewers for their insightful comments. We are also grateful to Panupong Pasupat for his help in configuring the floating parser baseline, and to Arvind Neelakantan for his help with the neural programmer model. References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In Conference of the North American Chapter of the Association for Computational Linguistics. Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of Empirical Methods in Natural Language Processing. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of Empirical Methods in Natural Language Processing. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 1156– 1165. Stefan L Frank. 2013. Uncertainty reduction as a measure of cognitive load in sentence comprehension. Topics in Cognitive Science 5(3). John Hale. 2006. Uncertainty about the rest of the sentence. Cognitive Science 30(4). Sanda Harabagiu, Andrew Hickl, John Lehmann, and Dan Moldovan. 2005. Experiments with interactive question-answering. In Proceedings of the Association for Computational Linguistics. Diane Kelly and Jimmy Lin. 2007. Overview of the trec 2006 ciqa task. In ACM SIGIR Forum. ACM, volume 41, pages 107–116. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition 106(3). Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Vancouver, Canada. Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM 59(9):68–76. Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the Association for Computational Linguistics. 1829 Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. In Proceedings of the International Conference on Learning Representations. 
Arvind Neelakantan, Quoc Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In Proceedings of the International Conference on Learning Representations. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the Association for Computational Linguistics. Haoruo Peng, Ming-Wei Chang, and Wen-tau Yih. 2017. Maximum margin reward networks for learning from explicit and implicit supervision. Manuscript Submitted for Publication. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of Empirical Methods in Natural Language Processing. Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin Markov networks. In Proceedings of Advances in Neural Information Processing Systems. Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of machine learning research 6(Sep):1453–1484. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Annual Meeting of the Association for Computational Linguistics (ACL). pages 1321–1331. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 201–206. Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Association for Computational Linguistics. A Action Neural Module Design We describe here the neural module design for each action. As most actions try to match question text to column names or table entries, the neural network architectures are essentially various kinds of semantic similarity matching functions. A1 Select-column Conceptually, the corresponding module is a combination of various matching scores. Let WQ be the embeddings of words in the question and WC be the embeddings of words in the target column name. The component matching functions are: fmax = 1 |WC| X wc∈WC max wq∈WQ wT q wc favg =   1 |WC| X wc∈WC wc   T   1 |WQ| X wq∈WQ wq   Essentially, for each word in the column name, fmax finds the highest matching question word and outputs the average score. Conversely, favg simply uses the average word vectors of the question and column name and returns their inner product. In another variant of favg, we replace the question representation with the output of a bidirectional LSTM model. These matching component functions are combined by a 2-layer feed-forward neural network, which outputs a scalar value as the action score. 
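As a concrete, simplified illustration of the A1 module described above, the numpy sketch below computes fmax and favg and combines them with a small feed-forward scorer. The network sizes and the random initialization are our own assumptions for illustration; the real module is trained end-to-end inside DynSP.

```python
import numpy as np

def f_max(W_Q, W_C):
    """For each column-name word, take its best-matching question word (dot
    product) and average the resulting scores."""
    scores = W_C @ W_Q.T                      # |W_C| x |W_Q| word-pair similarities
    return scores.max(axis=1).mean()

def f_avg(W_Q, W_C):
    """Inner product of the averaged question and column-name embeddings."""
    return W_C.mean(axis=0) @ W_Q.mean(axis=0)

def select_column_score(W_Q, W_C, params):
    """Combine the matching features with a 2-layer feed-forward network."""
    feats = np.array([f_max(W_Q, W_C), f_avg(W_Q, W_C)])
    hidden = np.tanh(params["W1"] @ feats + params["b1"])
    return float(params["w2"] @ hidden + params["b2"])

# Toy usage with random 50-d word embeddings for a 6-word question
# and a 2-word column name.
rng = np.random.default_rng(0)
W_Q, W_C = rng.normal(size=(6, 50)), rng.normal(size=(2, 50))
params = {"W1": rng.normal(size=(8, 2)), "b1": np.zeros(8),
          "w2": rng.normal(size=8), "b2": 0.0}
print(select_column_score(W_Q, W_C, params))
```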
A2 Cond-column Because this action also tries to find the correct column (but for conditions), we use the same matching scoring functions as in A1 module. However, a different 2-layer feed-forward neural network is used to combine the scores, as well as two binary features that indicate whether all the cells in this column are numeric values or not. A3 Op-Equal This action checks whether a particular column value matches the question text. Suppose the average of the word vectors of the particular cell is wx and the question word vectors are WQ. Here the matching function is: fmax = max wq∈WQ wT q wx 1830 A4 Op-NotEqual The neural module for this action extends the design for A3. It first uses a max function similar to fmax in A3 to compare the vector of the negation word “not”, and the question words. This score is combined with the fmax score in A3 using a 2-layer feed-forward neural network as the final module score. A5-A8 Op-GT, Op-GE, Op-LT, Op-LE The arguments of these comparison operations are extracted from question in advance. Therefore, the action modules just need to decide whether such relations are indeed used in the question. We take a simple strategy by initialing a special word vector that tries to capture the semantics of the relation. Take op-gt, greater than, for example. We use the average of the vectors of words like more, greater and larger to initialize the special word vector, denoted as wgt. Let warg be the averaged vectors of words within a [−2, +2] window centered at the argument in the question. The inner product of wgt and warg is then used as the scoring function. A9-A10 Op-ArgMin, Op-ArgMax We handle ArgMin and ArgMax similarly to the comparison operations. The difference is that we compare the special word vector to the averaged vector of all the question words, instead of a short subsequence of words. Subsequent actions The modules in subsequent actions use basically the same design as their counterparts in the independent question setting. The main difference is that we extend the question representation to words from not just the target question, but also the question that immediately precedes it. 1831
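To round out the appendix, here is a small sketch of the comparison-operator scoring described for A5-A8, again in numpy. The embedding lookup, the seed words, and the example question are illustrative assumptions; in the actual model the relation vector is a trainable parameter that is merely initialized from such seed words.

```python
import numpy as np

def op_gt_score(question_tokens, arg_index, embed, seed_words=("more", "greater", "larger")):
    """Score whether a 'greater than' relation is expressed around the numeric
    argument at position arg_index, following the A5 module description:
    a special relation vector (here built from seed words) is compared with
    the averaged embeddings in a [-2, +2] window around the argument."""
    w_gt = np.mean([embed(w) for w in seed_words], axis=0)
    lo, hi = max(0, arg_index - 2), min(len(question_tokens), arg_index + 3)
    w_arg = np.mean([embed(w) for w in question_tokens[lo:hi]], axis=0)
    return float(w_gt @ w_arg)

# Toy usage with a hash-based stand-in for a pretrained embedding table.
def toy_embed(word, dim=50):
    rng = np.random.default_rng(abs(hash(word)) % (2 ** 32))
    return rng.normal(size=dim)

question = "which super heroes first appeared after 2009 ?".split()
print(op_gt_score(question, question.index("2009"), toy_embed))
```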
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1832–1846 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1168 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1832–1846 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1168 Gated-Attention Readers for Text Comprehension Bhuwan Dhingra∗ Hanxiao Liu∗ Zhilin Yang William W. Cohen Ruslan Salakhutdinov School of Computer Science Carnegie Mellon University {bdhingra,hanxiaol,zhiliny,wcohen,rsalakhu}@cs.cmu.edu Abstract In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader1, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection. The GA Reader obtains state-of-the-art results on three benchmarks for this task–the CNN & Daily Mail news stories and the Who Did What dataset. The effectiveness of multiplicative interaction is demonstrated by an ablation study, and by comparing to alternative compositional operators for implementing the gated-attention. 1 Introduction A recent trend to measure progress towards machine reading is to test a system’s ability to answer questions about a document it has to comprehend. Towards this end, several large-scale datasets of cloze-style questions over a context document have been introduced recently, which allow the training of supervised machine learning systems (Hermann et al., 2015; Hill et al., 2016; Onishi et al., 2016). Such datasets can be easily constructed automatically and the unambiguous nature of their queries provides an objective benchmark to measure a system’s performance at text comprehension. ∗BD and HL contributed equally to this work. 1Source code is available on github: https:// github.com/bdhingra/ga-reader Deep learning models have been shown to outperform traditional shallow approaches on text comprehension tasks (Hermann et al., 2015). The success of many recent models can be attributed primarily to two factors: (1) Multi-hop architectures (Weston et al., 2015; Sordoni et al., 2016; Shen et al., 2016), allow a model to scan the document and the question iteratively for multiple passes. (2) Attention mechanisms, (Chen et al., 2016; Hermann et al., 2015) borrowed from the machine translation literature (Bahdanau et al., 2014), allow the model to focus on appropriate subparts of the context document. Intuitively, the multi-hop architecture allows the reader to incrementally refine token representations, and the attention mechanism re-weights different parts in the document according to their relevance to the query. The effectiveness of multi-hop reasoning and attentions have been explored orthogonally so far in the literature. In this paper, we focus on combining both in a complementary manner, by designing a novel attention mechanism which gates the evolving token representations across hops. 
More specifically, unlike existing models where the query attention is applied either token-wise (Hermann et al., 2015; Kadlec et al., 2016; Chen et al., 2016; Hill et al., 2016) or sentence-wise (Weston et al., 2015; Sukhbaatar et al., 2015) to allow weighted aggregation, the Gated-Attention (GA) module proposed in this work allows the query to directly interact with each dimension of the token embeddings at the semantic-level, and is applied layer-wise as information filters during the multi-hop representation learning process. Such a fine-grained attention enables our model to learn conditional token representations w.r.t. the given question, leading to accurate answer selections. We show in our experiments that the proposed GA reader, despite its relative simplicity, consis1832 tently improves over a variety of strong baselines on three benchmark datasets . Our key contribution, the GA module, provides a significant improvement for large datasets. Qualitatively, visualization of the attentions at intermediate layers of the GA reader shows that in each layer the GA reader attends to distinct salient aspects of the query which help in determining the answer. 2 Related Work The cloze-style QA task involves tuples of the form (d, q, a, C), where d is a document (context), q is a query over the contents of d, in which a phrase is replaced with a placeholder, and a is the answer to q, which comes from a set of candidates C. In this work we consider datasets where each candidate c ∈C has at least one token which also appears in the document. The task can then be described as: given a document-query pair (d, q), find a ∈C which answers q. Below we provide an overview of representative neural network architectures which have been applied to this problem. LSTMs with Attention: Several architectures introduced in Hermann et al. (2015) employ LSTM units to compute a combined document-query representation g(d, q), which is used to rank the candidate answers. These include the DeepLSTM Reader which performs a single forward pass through the concatenated (document, query) pair to obtain g(d, q); the Attentive Reader which first computes a document vector d(q) by a weighted aggregation of words according to attentions based on q, and then combines d(q) and q to obtain their joint representation g(d(q), q); and the Impatient Reader where the document representation is built incrementally. The architecture of the Attentive Reader has been simplified recently in Stanford Attentive Reader, where shallower recurrent units were used with a bilinear form for the query-document attention (Chen et al., 2016). Attention Sum: The Attention-Sum (AS) Reader (Kadlec et al., 2016) uses two bidirectional GRU networks (Cho et al., 2015) to encode both d and q into vectors. A probability distribution over the entities in d is obtained by computing dot products between q and the entity embeddings and taking a softmax. Then, an aggregation scheme named pointer-sum attention is further applied to sum the probabilities of the same entity, so that frequent entities the document will be favored compared to rare ones. Building on the AS Reader, the Attention-over-Attention (AoA) Reader (Cui et al., 2017) introduces a two-way attention mechanism where the query and the document are mutually attentive to each other. Mulit-hop Architectures: Memory Networks (MemNets) were proposed in Weston et al. (2015), where each sentence in the document is encoded to a memory by aggregating nearby words. 
Attention over the memory slots given the query is used to compute an overall memory and to renew the query representation over multiple iterations, allowing certain types of reasoning over the salient facts in the memory and the query. Neural Semantic Encoders (NSE) (Munkhdalai & Yu, 2017a) extended MemNets by introducing a write operation which can evolve the memory over time during the course of reading. Iterative reasoning has been found effective in several more recent models, including the Iterative Attentive Reader (Sordoni et al., 2016) and ReasoNet (Shen et al., 2016). The latter allows dynamic reasoning steps and is trained with reinforcement learning. Other related works include Dynamic Entity Representation network (DER) (Kobayashi et al., 2016), which builds dynamic representations of the candidate answers while reading the document, and accumulates the information about an entity by max-pooling; EpiReader (Trischler et al., 2016) consists of two networks, where one proposes a small set of candidate answers, and the other reranks the proposed candidates conditioned on the query and the context; Bi-Directional Attention Flow network (BiDAF) (Seo et al., 2017) adopts a multi-stage hierarchical architecture along with a flow-based attention mechanism; Bajgar et al. (2016) showed a 10% improvement on the CBT corpus (Hill et al., 2016) by training the AS Reader on an augmented training set of about 14 million examples, making a case for the community to exploit data abundance. The focus of this paper, however, is on designing models which exploit the available data efficiently. 3 Gated-Attention Reader Our proposed GA readers perform multiple hops over the document (context), similar to the Memory Networks architecture (Sukhbaatar et al., 2015). Multi-hop architectures mimic the multistep comprehension process of human readers, and have shown promising results in several recent models for text comprehension (Sordoni et al., 1833 2016; Kumar et al., 2016; Shen et al., 2016). The contextual representations in GA readers, namely the embeddings of words in the document, are iteratively refined across hops until reaching a final attention-sum module (Kadlec et al., 2016) which maps the contextual representations in the last hop to a probability distribution over candidate answers. The attention mechanism has been introduced recently to model human focus, leading to significant improvement in machine translation and image captioning (Bahdanau et al., 2014; Mnih et al., 2014). In reading comprehension tasks, ideally, the semantic meanings carried by the contextual embeddings should be aware of the query across hops. As an example, human readers are able to keep the question in mind during multiple passes of reading, to successively mask away information irrelevant to the query. However, existing neural network readers are restricted to either attend to tokens (Hermann et al., 2015; Chen et al., 2016) or entire sentences (Weston et al., 2015), with the assumption that certain sub-parts of the document are more important than others. In contrast, we propose a finer-grained model which attends to components of the semantic representation being built up by the GRU. The new attention mechanism, called gated-attention, is implemented via multiplicative interactions between the query and the contextual embeddings, and is applied per hop to act as fine-grained information filters during the multi-step reasoning. 
The filters weigh individual components of the vector representation of each token in the document separately. The design of gated-attention layers is motivated by the effectiveness of multiplicative interaction among vector-space representations, e.g., in various types of recurrent units (Hochreiter & Schmidhuber, 1997; Wu et al., 2016) and in relational learning (Yang et al., 2014; Kiros et al., 2014). While other types of compositional operators are possible, such as concatenation or addition (Mitchell & Lapata, 2008), we find that multiplication has strong empirical performance (section 4.3), where query representations naturally serve as information filters across hops. 3.1 Model Details Several components of the model use a Gated Recurrent Unit (GRU) (Cho et al., 2015) which maps an input sequence X = [x1, x2, . . . , xT ] to an ouput sequence H = [h1, h2, . . . , hT ] as follows: rt = σ(Wrxt + Urht−1 + br), zt = σ(Wzxt + Uzht−1 + bz), ˜ht = tanh(Whxt + Uh(rt ⊙ht−1) + bh), ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht. where ⊙denotes the Hadamard product or the element-wise multiplication. rt and zt are called the reset and update gates respectively, and ˜ht the candidate output. A Bi-directional GRU (BiGRU) processes the sequence in both forward and backward directions to produce two sequences [hf 1, hf 2, . . . , hf T ] and [hb 1, hb 2, . . . , hb T ], which are concatenated at the output ←→ GRU(X) = [hf 1∥hb T , . . . , hf T ∥hb 1] (1) where ←→ GRU(X) denotes the full output of the Bi-GRU obtained by concatenating each forward state hf i and backward state hb T−i+1 at step i given the input X. Note ←→ GRU(X) is a matrix in R2nh×T where nh is the number of hidden units in GRU. Let X(0) = [x(0) 1 , x(0) 2 , . . . x(0) |D|] denote the token embeddings of the document, which are also inputs at layer 1 for the document reader below, and Y = [y1, y2, . . . y|Q|] denote the token embeddings of the query. Here |D| and |Q| denote the document and query lengths respectively. 3.1.1 Multi-Hop Architecture Fig. 1 illustrates the Gated-Attention (GA) reader. The model reads the document and the query over K horizontal layers, where layer k receives the contextual embeddings X(k−1) of the document from the previous layer. The document embeddings are transformed by taking the full output of a document Bi-GRU (indicated in blue in Fig. 1): D(k) = ←→ GRU (k) D (X(k−1)) (2) At the same time, a layer-specific query representation is computed as the full output of a separate query Bi-GRU (indicated in green in Figure 1): Q(k) = ←→ GRU (k) Q (Y ) (3) Next, Gated-Attention is applied to D(k) and Q(k) to compute inputs for the next layer X(k). X(k) = GA(D(k), Q(k)) (4) where GA is defined in the following subsection. 1834 Figure 1: Gated-Attention Reader. Dashed lines represent dropout connections. 3.1.2 Gated-Attention Module For brevity, let us drop the superscript k in this subsection as we are focusing on a particular layer. For each token di in D, the GA module forms a token-specific representation of the query ˜qi using soft attention, and then multiplies the query representation element-wise with the document token representation. Specifically, for i = 1, . . . , |D|: αi = softmax(Q⊤di) (5) ˜qi = Qαi xi = di ⊙˜qi (6) In equation (6) we use the multiplication operator to model the interactions between di and ˜qi. In the experiments section, we also report results for other choices of gating functions, including addition xi = di + ˜qi and concatenation xi = di∥˜qi. 
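As a concrete illustration of equations (5) and (6), the following numpy sketch applies gated attention to toy Bi-GRU outputs. The dimensions and random inputs are our own illustrative choices; in the actual reader this layer operates on the document and query Bi-GRU outputs at every hop.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention(D, Q, gating="multiply"):
    """Gated-Attention layer: for each document token d_i, form a token-specific
    query vector q_tilde_i by soft attention over Q (Eq. 5) and gate d_i with it
    (Eq. 6). D is (2*n_h, |D|), Q is (2*n_h, |Q|), matching the Bi-GRU outputs."""
    out_dim = D.shape[0] if gating != "concat" else 2 * D.shape[0]
    X = np.zeros((out_dim, D.shape[1]))
    for i in range(D.shape[1]):
        d_i = D[:, i]
        alpha_i = softmax(Q.T @ d_i)          # attention over query tokens, Eq. (5)
        q_tilde = Q @ alpha_i                 # token-specific query representation
        if gating == "multiply":              # the GA operator used in this paper
            X[:, i] = d_i * q_tilde
        elif gating == "add":                 # alternative gating function
            X[:, i] = d_i + q_tilde
        else:                                 # "concat" alternative
            X[:, i] = np.concatenate([d_i, q_tilde])
    return X

# Toy usage: 8 document tokens, 5 query tokens, hidden size 2*n_h = 6.
rng = np.random.default_rng(1)
D, Q = rng.normal(size=(6, 8)), rng.normal(size=(6, 5))
print(gated_attention(D, Q).shape)            # (6, 8): inputs X^(k) to the next hop
```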
3.1.3 Answer Prediction Let q(K) ℓ = qf ℓ∥qb T−ℓ+1 be an intermediate output of the final layer query Bi-GRU at the location ℓof the cloze token in the query, and D(K) = ←→ GRU (K) D (X(K−1)) be the full output of final layer document Bi-GRU. To obtain the probability that a particular token in the document answers the query, we take an inner-product between these two, and pass through a softmax layer: s = softmax((q(K) ℓ )T D(K)) (7) where vector s defines a probability distribution over the |D| tokens in the document. The probability of a particular candidate c ∈C as being the answer is then computed by aggregating the probabilities of all document tokens which appear in c and renormalizing over the candidates: Pr(c|d, q) ∝ X i∈I(c,d) si (8) where I(c, d) is the set of positions where a token in c appears in the document d. This aggregation operation is the same as the pointer sum attention applied in the AS Reader (Kadlec et al., 2016). Finally, the candidate with maximum probability is selected as the predicted answer: a∗= argmaxc∈C Pr(c|d, q). (9) During the training phase, model parameters of GA are updated w.r.t. a cross-entropy loss between the predicted probabilities and the true answers. 3.1.4 Further Enhancements Character-level Embeddings: Given a token w from the document or query, its vector space representation is computed as x = L(w)||C(w). L(w) retrieves the word-embedding for w from a lookup table L ∈R|V |×nl, whose rows hold a vector for each unique token in the vocabulary. We also utilize a character composition model C(w) which generates an orthographic embedding of the token. Such embeddings have been previously shown to be helpful for tasks like Named Entity Recognition (Yang et al., 2016) and dealing with OOV tokens at test time (Dhingra et al., 2016). The embedding C(w) is generated by taking the final outputs zf nc and zb nc of a Bi-GRU applied to embeddings from 1835 a lookup table of characters in the token, and applying a linear transformation: z = zf nc||zb nc C(w) = Wz + b Question Evidence Common Word Feature (qecomm): Li et al. (2016) recently proposed a simple token level indicator feature which significantly boosts reading comprehension performance in some cases. For each token in the document we construct a one-hot vector fi ∈{0, 1}2 indicating its presence in the query. It can be incorporated into the GA reader by assigning a feature lookup table F ∈RnF ×2 (we use nF = 2), taking the feature embedding ei = fT i F and appending it to the inputs of the last layer document BiGRU as, x(K) i ∥fi for all i. We conducted several experiments both with and without this feature and observed some interesting trends, which are discussed below. Henceforth, we refer to this feature as the qe-comm feature or just feature. 4 Experiments and Results 4.1 Datasets We evaluate the GA reader on five large-scale datasets recently proposed in the literature. The first two, CNN and Daily Mail news stories2 consist of articles from the popular CNN and Daily Mail websites (Hermann et al., 2015). A query over each article is formed by removing an entity from the short summary which follows the article. Further, entities within each article were anonymized to make the task purely a comprehension one. N-gram statistics, for instance, computed over the entire corpus are no longer useful in such an anonymized corpus. The next two datasets are formed from two different subsets of the Children’s Book Test (CBT)3 (Hill et al., 2016). 
Documents consist of 20 contiguous sentences from the body of a popular children’s book, and queries are formed by deleting a token from the 21st sentence. We only focus on subsets where the deleted token is either a common noun (CN) or named entity (NE) since simple language models already give human-level performance on the other types (cf. (Hill et al., 2016)). 2https://github.com/deepmind/rc-data 3http://www.thespermwhale.com/jaseweston/babi/ CBTest.tgz The final dataset is Who Did What4 (WDW) (Onishi et al., 2016), constructed from the LDC English Gigaword newswire corpus. First, article pairs which appeared around the same time and with overlapping entities are chosen, and then one article forms the document and a cloze query is constructed from the other. Missing tokens are always person named entities. Questions which are easily answered by simple baselines are filtered out, to make the task more challenging. There are two versions of the training set—a small but focused “Strict” version and a large but noisy “Relaxed” version. We report results on both settings which share the same validation and test sets. Statistics of all the datasets used in our experiments are summarized in the Appendix (Table 5). 4.2 Performance Comparison Tables 1 and 3 show a comparison of the performance of GA Reader with previously published results on WDW and CNN, Daily Mail, CBT datasets respectively. The numbers reported for GA Reader are for single best models, though we compare to both ensembles and single models from prior work. GA Reader-- refers to an earlier version of the model, unpublished but described in a preprint, with the following differences—(1) it does not utilize token-specific attentions within the GA module, as described in equation (5), (2) it does not use a character composition model, (3) it is initialized with word embeddings pretrained on the corpus itself rather than GloVe. A detailed analysis of these differences is studied in the next section. Here we present 4 variants of the latest GA Reader, using combinations of whether the qe-comm feature is used (+feature) or not, and whether the word lookup table L(w) is updated during training or fixed to its initial value. Other hyperparameters are listed in Appendix A. Interestingly, we observe that feature engineering leads to significant improvements for WDW and CBT datasets, but not for CNN and Daily Mail datasets. We note that anonymization of the latter datasets means that there is already some feature engineering (it adds hints about whether a token is an entity), and these are much larger than the other four. In machine learning it is common to see the effect of feature engineering diminish with increasing data size. Similarly, fixing the word embeddings provides an improvement for the WDW 4https://tticnlp.github.io/who_did_what/ 1836 Table 1: Validation/Test accuracy (%) on WDW dataset for both “Strict” and “Relaxed” settings. Results with “†” are cf previously published works. Model Strict Relaxed Val Test Val Test Human † – 84 – – Attentive Reader † – 53 – 55 AS Reader † – 57 – 59 Stanford AR † – 64 – 65 NSE † 66.5 66.2 67.0 66.7 GA-- † – 57 – 60.0 GA (update L(w)) 67.8 67.0 67.0 66.6 GA (fix L(w)) 68.3 68.0 69.6 69.1 GA (+feature, update L(w)) 70.1 69.5 70.9 71.0 GA (+feature, fix L(w)) 71.6 71.2 72.6 72.6 Table 2: Top: Performance of different gating functions. Bottom: Effect of varying the number of hops K. Results on WDW without using the qe-comm feature and with fixed L(w). 
Gating Function Accuracy Val Test Sum 64.9 64.5 Concatenate 64.4 63.7 Multiply 68.3 68.0 K 1 (AS) † – 57 2 65.6 65.6 3 68.3 68.0 4 68.3 68.2 and CBT, but not for CNN and Daily Mail. This is not surprising given that the latter datasets are larger and less prone to overfitting. Comparing with prior work, on the WDW dataset the basic version of the GA Reader outperforms all previously published models when trained on the Strict setting. By adding the qecomm feature the performance increases by 3.2% and 3.5% on the Strict and Relaxed settings respectively to set a new state of the art on this dataset. On the CNN and Daily Mail datasets the GA Reader leads to an improvement of 3.2% and 4.3% respectively over the best previous single models. They also outperform previous ensemble models, setting a new state of that art for both datasets. For CBT-NE, GA Reader with the qecomm feature outperforms all previous single and ensemble models except the AS Reader trained on the much larger BookTest Corpus (Bajgar et al., 2016). Lastly, on CBT-CN the GA Reader with the qe-comm feature outperforms all previously published single models except the NSE, and AS Reader trained on a larger corpus. For each of the 4 datasets on which GA achieves the top performance, we conducted one-sample proportion tests to test whether GA is significantly better than the second-best baseline. The p-values are 0.319 for CNN, <0.00001 for DailyMail, 0.028 for CBTNE, and <0.00001 for WDW. In other words, GA statistically significantly outperforms all other baselines on 3 out of those 4 datasets at the 5% significance level. The results could be even more significant under paired tests, however we did not have access to the predictions from the baselines. 4.3 GA Reader Analysis In this section we do an ablation study to see the effect of Gated Attention. We compare the GA Reader as described here to a model which is exactly the same in all aspects, except that it passes document embeddings D(k) in each layer directly to the inputs of the next layer without using the GA module. In other words X(k) = D(k) for all k > 0. This model ends up using only one query GRU at the output layer for selecting the answer from the document. We compare these two variants both with and without the qe-comm feature on CNN and WDW datasets for three subsets of the training data - 50%, 75% and 100%. Test set accuracies for these settings are shown in Figure 2. On CNN when tested without feature engineering, we observe that GA provides a significant boost in performance compared to without GA. When tested with the feature it still gives an improvement, but the improvement is significant only with 100% training data. On WDW-Strict, which is a third of the size of CNN, without the feature we see an improvement when using GA versus without using GA, which becomes significant as the training set size increases. When tested with the feature on WDW, for a small data size without GA does better than with GA, but as the dataset size increases they become equivalent. We conclude that GA provides a boost in the absence of feature engineering, or as the training set size increases. Next we look at the question of how to gate intermediate document reader states from the query, i.e. what operation to use in equation 6. Table 1837 Table 3: Validation/Test accuracy (%) on CNN, Daily Mail and CBT. Results marked with “†” are cf previously published works. Results marked with “‡” were obtained by training on a larger training set. 
Best performance on standard training sets is in bold, and on larger training sets in italics. Model CNN Daily Mail CBT-NE CBT-CN Val Test Val Test Val Test Val Test Humans (query) † – – – – – 52.0 – 64.4 Humans (context + query) † – – – – – 81.6 – 81.6 LSTMs (context + query) † – – – – 51.2 41.8 62.6 56.0 Deep LSTM Reader † 55.0 57.0 63.3 62.2 – – – – Attentive Reader † 61.6 63.0 70.5 69.0 – – – – Impatient Reader † 61.8 63.8 69.0 68.0 – – – – MemNets † 63.4 66.8 – – 70.4 66.6 64.2 63.0 AS Reader † 68.6 69.5 75.0 73.9 73.8 68.6 68.8 63.4 DER Network † 71.3 72.9 – – – – – – Stanford AR (relabeling) † 73.8 73.6 77.6 76.6 – – – – Iterative Attentive Reader † 72.6 73.3 – – 75.2 68.6 72.1 69.2 EpiReader † 73.4 74.0 – – 75.3 69.7 71.5 67.4 AoA Reader † 73.1 74.4 – – 77.8 72.0 72.2 69.4 ReasoNet † 72.9 74.7 77.6 76.6 – – – – NSE † – – – – 78.2 73.2 74.3 71.9 BiDAF † 76.3 76.9 80.3 79.6 – – – – MemNets (ensemble) † 66.2 69.4 – – – – – – AS Reader (ensemble) † 73.9 75.4 78.7 77.7 76.2 71.0 71.1 68.9 Stanford AR (relabeling,ensemble) † 77.2 77.6 80.2 79.2 – – – – Iterative Attentive Reader (ensemble) † 75.2 76.1 – – 76.9 72.0 74.1 71.0 EpiReader (ensemble) † – – – – 76.6 71.8 73.6 70.6 AS Reader (+BookTest) † ‡ – – – – 80.5 76.2 83.2 80.8 AS Reader (+BookTest,ensemble) † ‡ – – – – 82.3 78.4 85.7 83.7 GA-73.0 73.8 76.7 75.7 74.9 69.0 69.0 63.9 GA (update L(w)) 77.9 77.9 81.5 80.9 76.7 70.1 69.8 67.3 GA (fix L(w)) 77.9 77.8 80.4 79.6 77.2 71.4 71.6 68.0 GA (+feature, update L(w)) 77.3 76.9 80.7 80.0 77.2 73.3 73.0 69.8 GA (+feature, fix L(w)) 76.7 77.4 80.0 79.3 78.5 74.9 74.4 70.7 2 (top) shows the performance on WDW dataset for three common choices – sum (x = d + q), concatenate (x = d∥q) and multiply (x = d⊙q). Empirically we find element-wise multiplication does significantly better than the other two, which justifies our motivation to “filter” out document features which are irrelevant to the query. At the bottom of Table 2 we show the effect of varying the number of hops K of the GA Reader on the final performance. We note that for K = 1, our model is equivalent to the AS Reader without any GA modules. We see a steep and steady rise in accuracy as the number of hops is increased from K = 1 to 3, which remains constant beyond that. This is a common trend in machine learning as model complexity is increased, however we note that a multi-hop architecture is important to achieve a high performance for this task, and provide further evidence for this in the next section. 4.4 Ablation Study for Model Components Table 4 shows accuracy on WDW by removing one component at a time. The steepest reduction is observed when we replace pretrained GloVe vectors with those pretrained on the corpus itself. GloVe vectors were trained on a large corpus of about 6 billion tokens (Pennington et al., 2014), and provide an important source of prior knowl1838 Figure 2: Performance in accuracy with and without the Gated-Attention module over different training sizes. p-values for an exact one-sided Mcnemar’s test are given inside the parentheses for each setting. 
[Figure 2 panels — CNN (w/o qe-comm feature): p < 0.01 at 50%, 75%, and 100% of training data; CNN (w qe-comm feature): p = 0.07, 0.13, < 0.01; WDW (w/o qe-comm feature): p = 0.28, < 0.01, < 0.01; WDW (w qe-comm feature): p < 0.01, 0.42, 0.27; legend: No Gating vs. With Gating.]
Table 4: Ablation study on WDW dataset, without using the qe-comm feature and with fixed L(w). Results marked with † are cf Onishi et al. (2016).
Model                        Accuracy (Val)   Accuracy (Test)
GA                           68.3             68.0
−char                        66.9             66.9
−token-attentions (eq. 5)    65.7             65.0
−glove, +corpus              64.0             62.5
GA-- †                       –                57
edge for the model. Note that the strongest baseline on WDW, NSE (Munkhdalai & Yu, 2017b), also uses pretrained GloVe vectors, hence the comparison is fair in that respect. Next, we observe a substantial drop when removing token-specific attentions over the query in the GA module, which allow gating individual tokens in the document only by parts of the query relevant to that token rather than the overall query representation. Finally, removing the character embeddings, which were only used for WDW and CBT, leads to a reduction of about 1% in the performance.
4.5 Attention Visualization
To gain an insight into the reading process employed by the model we analyzed the attention distributions at intermediate layers of the reader. Figure 3 shows an example from the validation set of WDW dataset (several more are in the Appendix). In each figure, the left and middle plots visualize attention over the query (equation 5) for candidates in the document after layers 1 & 2 respectively. The right plot shows attention over candidates in the document of cloze placeholder (XXX) in the query at the final layer. The full document, query and correct answer are shown at the bottom. A generic pattern observed in these examples is that in intermediate layers, candidates in the document (shown along rows) tend to pick out salient tokens in the query which provide clues about the cloze, and in the final layer the candidate with the highest match with these tokens is selected as the answer. In Figure 3 there is a high attention of the correct answer on financial regulatory standards in the first layer, and on us president in the second layer. The incorrect answer, in contrast, only attends to one of these aspects, and hence receives a lower score in the final layer despite the n-gram overlap it has with the cloze token in the query. Importantly, different layers tend to focus on different tokens in the query, supporting the hypothesis that the multi-hop architecture of GA Reader is able to combine distinct pieces of information to answer the query.
Figure 3: Layer-wise attention visualization of GA Reader trained on WDW-Strict. See text for details.
5 Conclusion
We presented the Gated-Attention reader for answering cloze-style questions over documents. The GA reader features a novel multiplicative gating mechanism, combined with a multi-hop architecture. Our model achieves the state-of-the-art performance on several large-scale benchmark datasets with more than 4% improvements over competitive baselines. Our model design is backed up by an ablation study showing statistically significant improvements of using Gated Attention as information filters. We also showed empirically that multiplicative gating is superior to addi-
tion and concatenation operations for implementing gated-attentions, though a theoretical justification remains part of future research goals. Analysis of document and query attentions in intermediate layers of the reader further reveals that the model iteratively attends to different aspects of the query to arrive at the final answer. In this paper we have focused on text comprehension, but we believe that the Gated-Attention mechanism may benefit other tasks as well where multiple sources of information interact. Acknowledgments This work was funded by NSF under CCF1414030 and Google Research. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016. Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. ACL, 2016. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. ACL, 2015. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. ACL, 2017. Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. Tweet2vec: Character-based distributed representations for social media. ACL, 2016. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684–1692, 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children’s books with explicit memory representations. ICLR, 2016. Sepp Hochreiter and J¨urgen Schmidhuber. Long shortterm memory. Neural computation, 9(8):1735– 1780, 1997. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. ACL, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. Ryan Kiros, Richard Zemel, and Ruslan R Salakhutdinov. A multiplicative model for learning distributed text-based attribute representations. In Advances in Neural Information Processing Systems, pp. 2348– 2356, 2014. 1840 Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representations with max-pooling improves machine reading. In NAACLHLT, 2016. Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. ICML, 2016. Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for opendomain factoid question answering. arXiv preprint arXiv:1607.06275, 2016. Jeff Mitchell and Mirella Lapata. Vector-based models of semantic composition. In ACL, pp. 236–244, 2008. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204– 2212, 2014. Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. EACL, 2017a. Tsendsuren Munkhdalai and Hong Yu. 
Reasoning with memory augmented neural networks for language comprehension. ICLR, 2017b. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A largescale person-centered cloze dataset. EMNLP, 2016. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532– 1543, 2014. URL http://www.aclweb.org/ anthology/D14-1162. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention flow for machine comprehension. ICLR, 2017. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016. Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2431–2439, 2015. Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605. 02688. Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. EMNLP, 2016. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. ICLR, 2015. Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. Advances in Neural Information Processing Systems, 2016. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Learning multi-relational semantics using neural-embedding models. NIPS Workshop on Learning Semantics, 2014. Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270, 2016. A Implementation Details Our model was implemented using the Theano (Theano Development Team, 2016) and Lasagne5 Python libraries. We used stochastic gradient descent with ADAM updates for optimization, which combines classical momentum and adaptive gradients (Kingma & Ba, 2015). The batch size was 32 and the initial learning rate was 5 × 10−4 which was halved every epoch after the second epoch. The same setting is applied to all models and datasets. We also used gradient clipping with a threshold of 10 to stabilize GRU training (Pascanu et al., 2013). We set the number of layers K to be 3 for all experiments. The number of hidden units for the character GRU was set to 50. The remaining two hyperparameters—size of document and query GRUs, and dropout rate—were tuned on the validation set, and their optimal values are shown in Table 6. In general, the optimal GRU size increases and the dropout rate decreases as the corpus size increases. The word lookup table was initialized with 100d GloVe vectors6 (Pennington et al., 2014) and OOV tokens at test time were assigned unique random vectors. We empirically observed that initializing with pre-trained embeddings gives higher performance compared to random initialization for all 5https://lasagne.readthedocs.io/en/latest/ 6http://nlp.stanford.edu/projects/glove/ 1841 Table 5: Dataset statistics. 
                 CNN      Daily Mail   CBT-NE   CBT-CN   WDW-Strict   WDW-Relaxed
# train          380,298  879,450      108,719  120,769  127,786      185,978
# validation     3,924    64,835       2,000    2,000    10,000       10,000
# test           3,198    53,182       2,500    2,500    10,000       10,000
# vocab          118,497  208,045      53,063   53,185   347,406      308,602
max doc length   2,000    2,000        1,338    1,338    3,085        3,085
Table 6: Hyperparameter settings for each dataset. dim(·) indicates hidden state size of GRU.
Hyperparameter   CNN   Daily Mail   CBT-NE   CBT-CN   WDW-Strict   WDW-Relaxed
Dropout          0.2   0.1          0.4      0.4      0.3          0.3
dim(Bi-GRU)      256   256          128      128      128          128
datasets. Furthermore, for smaller datasets (WDW and CBT) we found that fixing these embeddings to their pretrained values led to higher test performance, possibly since it avoids overfitting. We do not use the character composition model for CNN and Daily Mail, since their entities (and hence candidate answers) are anonymized to generic tokens. For other datasets the character lookup table was randomly initialized with 25d vectors. All other parameters were initialized to their default values as specified in the Lasagne library.
B Attention Plots
Figures 4–7: Layer-wise attention visualizations of GA Reader trained on WDW-Strict. See text for details.
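To make the optimization recipe in Appendix A concrete, below is a minimal sketch of the training settings (Adam at an initial learning rate of 5e-4, halved every epoch after the second; mini-batches of 32; gradient-norm clipping at 10). It is written in PyTorch purely for illustration — the authors' implementation uses Theano and Lasagne — and `build_ga_reader`, `compute_loss`, and the batch iterator are hypothetical stand-ins.

```python
import torch

model = build_ga_reader()  # hypothetical constructor for a GA Reader with K = 3 layers
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

def train(batches, num_epochs):
    for epoch in range(1, num_epochs + 1):
        for batch in batches():                    # mini-batches of 32 examples
            optimizer.zero_grad()
            loss = compute_loss(model, batch)      # cross-entropy on the Eq. (8) probabilities
            loss.backward()
            # Gradient clipping with a threshold of 10 (Pascanu et al., 2013)
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
            optimizer.step()
        if epoch >= 2:                             # halve the rate every epoch after the second
            for group in optimizer.param_groups:
                group["lr"] *= 0.5
```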
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1847–1856 Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1169
Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering
Jianbo Ye⋆, Yanran Li†, Zhaohui Wu‡, James Z. Wang⋆, Wenjie Li† and Jia Li⋆
⋆The Pennsylvania State University, University Park, Pennsylvania †The Hong Kong Polytechnic University, Hong Kong ‡Microsoft
Abstract
Word embeddings have become widely used in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported.
1 Introduction
Word embeddings (a.k.a. word vectors) have been broadly adopted for document analysis (Mikolov et al., 2013a,b). The embeddings can be trained from external large-scale corpus and then easily utilized for different data. To a certain degree, the knowledge mined from the corpus, possibly in very intricate ways, is coded in the vector space, the samples of which are easy to describe and ready for mathematical modeling. (Correspondence should be sent to J. Ye ([email protected]) and J. Li ([email protected]). The work was done when Z. Wu was with Penn State.) Despite the appeal, researchers will be interested in knowing how much gain an embedding can bring forth over the performance achievable by existing bag-of-words based approaches. Moreover, how can the gain be quantified? Such a preliminary evaluation will be carried out before building a sophisticated pipeline of analysis. Almost every document analysis model used in practice is constructed assuming a certain basic representation—bag-of-words or word embeddings—for the sake of computational tractability. For example, after word embedding is done, high-level models in the embedded space, such as entity representations, similarity measures, data manifolds, hierarchical structures, language models, and neural architectures, are designed for various tasks. In order to invent or enhance analysis tools, we want to understand precisely the pros and cons of the high-level models and the underlying representations. Because the model and the representation are tightly coupled in an analytical system, it is not easy to pinpoint where the gain or loss found in practice comes from. Should the gain be credited to the mechanism of the model or to the use of word embeddings?
As our experiments demonstrate, introducing certain assumptions will make individual methods effective only if certain constraints are met. We will address this issue under an unsupervised learning framework. Our proposed clustering paradigm has several advantages. Instead of packing the information of a document into a fixed-length vector for subsequent analysis, we treat a document more thoroughly as a distributional entity. In our approach, the distance between two empirical 1847 nonparametric measures (or discrete distributions) over the word embedding space is defined as the Wasserstein metric (a.k.a. the Earth Mover’s Distance or EMD) (Wan, 2007; Kusner et al., 2015). Comparing with a vector representation, an empirical distribution can represent with higher fidelity a cloud of points such as words in a document mapped to a certain space. In the extreme case, the empirical distribution can be set directly as the cloud of points. In contrast, a vector representation reduces data significantly, and its effectiveness relies on the assumption that the discarded information is irrelevant or nonessential to later analysis. This simplification itself can cause degradation in performance, obscuring the inherent power of the word embedding space. Our approach is intuitive and robust. In addition to a high fidelity representation of the data, the Wasserstein distance takes into account the crossterm relationship between different words in a principled fashion. According to the definition, the distance between two documents A and B are the minimum cumulative cost that words from document A need to “travel” to match exactly the set of words for document B. Here, the travel cost of a path between two words is their (squared) Euclidean distance in the word embedding space. Therefore, how much benefit the Wasserstein distance brings also depends on how well the word embedding space captures the semantic difference between words. While Wasserstein distance is well suited for document analysis, a major obstacle of approaches based on this distance is the computational intensity, especially for the original D2-clustering method (Li and Wang, 2008). The main technical hurdle is to compute efficiently the Wasserstein barycenter, which is itself a discrete distribution, for a given set of discrete distributions. Thanks to the recent advances in the algorithms for solving Wasserstein barycenters (Cuturi and Doucet, 2014; Ye and Li, 2014; Benamou et al., 2015; Ye et al., 2017), one can now perform document clustering by directly treating them as empirical measures over a word embedding space. Although the computational cost is still higher than the usual vector-based clustering methods, we believe that the new clustering approach has reached a level of efficiency to justify its usage given how important it is to obtain high-quality clustering of unstructured text data. For instance, clustering is a crucial step performed ahead of cross-document co-reference resolution (Singh et al., 2011), document summarization, retrospective events detection, and opinion mining (Zhai et al., 2011). 1.1 Contributions Our work has two main contributions. First, we create a basic tool of document clustering, which is easy to use and scalable. The new method leverages the latest numerical toolbox developed for optimal transport. It achieves state-of-theart clustering performance across heterogeneous text data—an advantage over other methods in the literature. 
Second, the method enables us to quantitatively inspect how well a word-embedding model can fit the data and how much gain it can produce over the bag-of-words models.
2 Related Work
In the original D2-clustering framework proposed by Li and Wang (2008), calculating Wasserstein barycenter involves solving a large-scale LP problem at each inner iteration, severely limiting the scalability and robustness of the framework. Such high magnitude of computations had prohibited it from deploying in many real-world applications until recently. To accelerate the computation of Wasserstein barycenter, and ultimately to improve D2-clustering, multiple numerical algorithmic efforts have been made in the recent few years (Cuturi and Doucet, 2014; Ye and Li, 2014; Benamou et al., 2015; Ye et al., 2017). Although the effectiveness of Wasserstein distance has been well recognized in the computer vision and multimedia literature, the property of Wasserstein barycenter has not been well understood. To our knowledge, there still lacks systematic study of applying Wasserstein barycenter and D2-clustering in document analysis with word embeddings.
A closely related work by Kusner et al. (2015) connects the Wasserstein distance to the word embeddings for comparing documents. Our work differs from theirs in the methodology. We directly pursue a scalable clustering setting rather than construct a nearest neighbor graph based on calculated distances, because the calculation of the Wasserstein distances of all pairs is too expensive to be practical. Kusner et al. (2015) used a lower bound that was less costly to compute in order to prune unnecessary full distance calculation, but the scalability of this modified approach is still limited, an issue to be discussed in Section 4.3. On the other hand, our approach adopts the framework similar to the K-means which is of complexity O(n) per iteration and usually converges within just tens of iterations. The computation of D2-clustering, though in its original form was magnitudes heavier than typical document clustering methods, can now be efficiently carried out with parallelization and proper implementations (Ye et al., 2017).
3 The Method
This section introduces the distance, the D2-clustering technique, the fast computation framework, and how they are used in the proposed document clustering method.
3.1 Wasserstein Distance
Suppose we represent each document d_k, consisting of m_k unique words, by a discrete measure or a discrete distribution, where k = 1, . . . , N with N being the sample size:
d_k = \sum_{i=1}^{m_k} w_i^{(k)} \delta_{x_i^{(k)}} .   (1)
Here \delta_x denotes the Dirac measure with support x, and w_i^{(k)} \ge 0 is the "importance weight" for the i-th word in the k-th document, with \sum_{i=1}^{m_k} w_i^{(k)} = 1. And x_i^{(k)} \in \mathbb{R}^d, called a support point, is the semantic embedding vector of the i-th word. The 2nd-order Wasserstein distance between two documents d_1 and d_2 (and likewise for any document pairs) is defined by the following LP problem:
W^2(d_1, d_2) := \min_{\Pi} \sum_{i,j} \pi_{i,j} \| x_i^{(1)} - x_j^{(2)} \|_2^2
s.t. \sum_{j=1}^{m_2} \pi_{i,j} = w_i, \forall i;  \sum_{i=1}^{m_1} \pi_{i,j} = w_j, \forall j;  \pi_{i,j} \ge 0, \forall i, j ,   (2)
where \Pi = \{\pi_{i,j}\} is an m_1 \times m_2 coupling matrix, and let \{C_{i,j} := \| x_i^{(1)} - x_j^{(2)} \|_2^2\} be transportation costs between words. Wasserstein distance is a true metric (Villani, 2003) for measures, and its best exact algorithm has a complexity of O(m^3 \log m) (Orlin, 1993), if m_1 = m_2 = m.
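Since Eq. (2) is an ordinary linear program, the definition can be checked on small examples with a general-purpose solver. The sketch below is an illustration written for this text rather than the authors' implementation; the function name and the dense constraint matrices are assumptions for exposition, and real use would rely on the specialized solvers discussed in Section 3.3 or a network-flow routine with the O(m^3 log m) complexity cited above.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_distance(x1, w1, x2, w2):
    """Solve Eq. (2) and return W(d1, d2).

    x1: (m1, d) embedding vectors of the words in document 1; w1: (m1,) weights summing to 1.
    x2: (m2, d) embedding vectors of the words in document 2; w2: (m2,) weights summing to 1.
    """
    m1, m2 = len(w1), len(w2)
    # Transportation costs C[i, j] = ||x1_i - x2_j||_2^2
    C = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(axis=-1)
    # Row marginals: sum_j pi[i, j] = w1[i]   (one constraint per word of document 1)
    A_rows = np.kron(np.eye(m1), np.ones((1, m2)))
    # Column marginals: sum_i pi[i, j] = w2[j] (one constraint per word of document 2)
    A_cols = np.kron(np.ones((1, m1)), np.eye(m2))
    res = linprog(
        c=C.ravel(),                              # objective: sum_ij pi_ij * C_ij
        A_eq=np.vstack([A_rows, A_cols]),
        b_eq=np.concatenate([w1, w2]),
        bounds=(0, None),                         # pi_ij >= 0
        method="highs",
    )
    return np.sqrt(res.fun)                       # res.fun equals W^2(d1, d2)
```

The dense formulation has m_1 m_2 variables, so it is only practical for short documents; its purpose here is to make the coupling constraints concrete.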
3.2 Discrete Distribution (D2-) Clustering
D2-clustering (Li and Wang, 2008) iterates between the assignment step and the centroid updating step in a similar way as Lloyd's K-means. Suppose we are to find K clusters. The assignment step finds each member distribution its nearest mean from K candidates. The mean of each cluster is again a discrete distribution with m support points, denoted by c_i, i = 1, . . . , K. Each mean is iteratively updated to minimize its total within-cluster variation. We can write the D2-clustering problem as follows: given sample data \{d_k\}_{k=1}^N, support size of means m, and desired number of clusters K, D2-clustering solves
\min_{c_1, \ldots, c_K} \sum_{k=1}^{N} \min_{1 \le i \le K} W^2(d_k, c_i) ,   (3)
where c_1, . . . , c_K are Wasserstein barycenters. At the core of solving the above formulation is an optimization method that searches the Wasserstein barycenters of varying partitions. Therefore, we concentrate on the following problem. For each cluster, we reorganize the index of member distributions from 1, . . . , n. The Wasserstein barycenter (Agueh and Carlier, 2011; Cuturi and Doucet, 2014) is by definition the solution of
\min_{c} \sum_{k=1}^{n} W^2(d_k, c) ,   (4)
where c = \sum_{i=1}^{m} w_i \delta_{x_i}. The above Wasserstein barycenter formulation involves two levels of optimization: the outer level finding the minimizer of total variations, and the inner level solving Wasserstein distances. We remark that in D2-clustering, we need to solve multiple Wasserstein barycenters rather than a single one. This constitutes the third level of optimization.
3.3 Modified Bregman ADMM for Computing Wasserstein Barycenter
The recent modified Bregman alternating direction method of multipliers (B-ADMM) algorithm (Ye et al., 2017), motivated by the work by Wang and Banerjee (2014), is a practical choice for computing Wasserstein barycenters. We briefly sketch their algorithmic procedure of this optimization method here for the sake of completeness. To solve for the Wasserstein barycenter defined in Eq. (4), the key procedure of the modified Bregman ADMM involves iterative updates of four blocks of primal variables: the support points of c — \{x_i\}_{i=1}^m (with transportation costs \{C_{i,j}^{(k)}\} for k = 1, . . . , n), the importance weights of c — \{w_i\}_{i=1}^m, and two sets of split matching variables — \{\pi_{i,j}^{(k,1)}\} and \{\pi_{i,j}^{(k,2)}\} for k = 1, . . . , n — as well as Lagrangian variables \{\lambda_{i,j}^{(k)}\} for k = 1, . . . , n.
In a data parallel implementation, only Eq. (5) and Eq. (10) (involving Pn k=1) needs to be synchronized. The software package detailed in (Ye et al., 2017) was used to generate relevant experiments. We make available our codes and pre-processed datasets for reproducing all experiments of our approach. 4 Experimental Results 4.1 Datasets and Evaluation Metrics We prepare six datasets to conduct a set of experiments. Two short-text datasets are created as follows. (D1) BBCNews abstract: We concatenate the title and the first sentence of news posts from BBCNews dataset1 to create an abstract version. (D2) Wiki events: Each cluster/class contains a set of news abstracts on the same story such as “2014 Crimean Crisis” crawled from Wikipedia current events following (Wu et al., 2015); this dataset offers more challenges because it has more finegrained classes and fewer documents (with shorter length) per class than the others. It also shows more realistic nature of applications such as news event clustering. We also experiment with two long-text datasets and two domain-specific text datasets. (D3) Reuters-21578: We obtain the original Reuters-21578 text dataset and process as follows: remove documents with multiple categories, remove documents with empty body, remove duplicates, and select documents from the largest ten categories. Reuters dataset is a highly unbalanced dataset (the top category has more than 3,000 documents while the 10-th category has fewer than 100). This imbalance induces some extra randomness in comparing the results. (D4) 20Newsgroups “bydate” version: We obtain the raw “bydate” version and process them as follows: remove headers and footers, remove URLs and Email addresses, delete documents with less than ten words. 20Newsgroups have roughly comparable sizes of categories. (D5) BBCSports. (D6) Ohsumed and Ohsumed-full: Documents are medical abstracts from the MeSH categories of the year 1991. Specifically, there are 23 cardiovascular diseases categories. Evaluating clustering results is known to be nontrivial. We use the following three sets of quantitative metrics to assess the quality of clusters by knowing the ground truth categorical labels of documents: (i) Homogeneity, Completeness, and V-measure (Rosenberg and Hirschberg, 2007); (ii) Adjusted Mutual Information (AMI) (Vinh et al., 2010); and (iii) Adjusted Rand Index (ARI) (Rand, 1971). For sensitivity analysis, we use the homogeneity score (Rosenberg and Hirschberg, 2007) as a projection dimension of other metrics, creating a 2D plot to visualize the metrics of a method along different homogeneity levels. Generally speaking, more clusters leads to higher homogeneity by chance. 1BBCNews and BBCSport are downloaded from http://mlg.ucd.ie/datasets/bbc.html 1850 4.2 Methods in Comparison We examine four categories of methods that assume a vector-space model for documents, and compare them to our D2-clustering framework. When needed, we use K-means++ to obtain clusters from dimension reduced vectors. To diminish the randomness brought by K-mean initialization, we ensemble the clustering results of 50 repeated runs (Strehl and Ghosh, 2003), and report the metrics for the ensembled one. The largest possible vocabulary used, excluding word embedding based approaches, is composed of words appearing in at least two documents. On each dataset, we select the same set of Ks, the number of clusters, for all methods. Typically, Ks are chosen around the number of ground truth categories in logarithmic scale. 
We prepare two versions of the TF-IDF vectors as the unigram model. The ensembled K-means methods are used to obtain clusters. (1) TF-IDF vector (Sparck Jones, 1972). (2) TF-IDF-N vector is found by choosing the most frequent N words in a corpus, where N 2 {500, 1000, 1500, 2000}. The difference between the two methods highlights the sensitivity issue brought by the size of chosen vocabulary. We also compare our approach with the following seven additional baselines. They are (3) Spectral Clustering (Laplacian), (4) Latent Semantic Indexing (LSI) (Deerwester et al., 1990), (5) Locality Preserving Projection (LPP) (He and Niyogi, 2004; Cai et al., 2005), (6) Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999; Xu et al., 2003), (7) Latent Dirichlet Allocation (LDA) (Blei et al., 2003; Hoffman et al., 2010), (8) Average of word vectors (AvgDoc), and (9) Paragraph Vectors (PV) (Le and Mikolov, 2014). Details on their experimental setups and hyper-parameter search strategies can be found in the Appendix. 4.3 Runtime We report the runtime for our approach on two largest datasets. The experiments regarding other smaller datasets all terminate within minutes in a single machine, which we omit due to space limitation. Like K-means, the runtime by our approach depends on the number of actual iterations before a termination criterion is met. In the Newsgroups dataset, with m = 100 and K = 45, the time per iteration is 121 seconds on 48 processors. In Reuters dataset, with m = 100 and K = 20, the time per iteration is 190 seconds on 24 processors. Each run terminates in around tens of iterations typically, upon which the percentage of label changes is less than 0.1%. Our approach adopts the Elkan’s algorithm (2003) pruning unnecessary computations of Wasserstein distance in assignment steps of K-means. For the Newsgroups data (with m = 100 and K = 45), our approach terminates in 36 iterations, and totally computes 12, 162, 717 (⇡3.5% ⇥186122) distance pairs in assignment steps, saving 60% (⇡1− 12,162,717 36⇥45⇥18612) distance pairs to calculate in the standard D2clustering. In comparison, the clustering approaches based on K-nearest neighbor (KNN) graph with the prefetch-and-prune method of (Kusner et al., 2015) needs substantially more pairs to compute Wasserstein distance, meanwhile the speed-ups also suffer from the curse of dimensionality. Their detailed statistics are reported in Table 1. Based on the results, our approach is much more practical as a basic document clustering tool. Method EMD counts (%) Our approach d = 400, K = 10 2.0 Our approach d = 400, K = 40 3.5 KNN d = 400, K = 1 73.9 KNN d = 100, K = 1 53.0 KNN d = 50, K = 1 23.4 Table 1: Percentage of total 186122 Wasserstein distance pairs needed to compute on the full Newsgroup dataset. The KNN graph based on 1st order Wasserstein distance is computed from the prefetch-and-prune approach according to (Kusner et al., 2015). 4.4 Results We now summarize our numerical results. Regular text datasets. The first four datasets in Table 2 cover quite general and broad topics. We consider them to be regular and representative datasets encountered more frequently in applications. We report the clustering performances of the ten methods in Fig. 1, where three different metrics are plotted against the clustering homogeneity. The higher result at the same level of homogeneity is better, and the ability to achieve higher homogeneity is also welcomed. 
Clearly, D2-clustering is the only method that shows ro1851 Figure 1: The quantitative cluster metrics used for performance evaluation of “BBC title and abstract”, “Wiki events”, “Reuters”, and “Newsgroups” (row-wise, from top to down). Y-axis corresponds to AMI, ARI, and Completeness, respective (column-wise, from left to right). X-axis corresponds to Homogeneity for sensitivity analysis. bustly superior performances among all ten methods. Specifically, it ranks first in three datasets, and second in the other one. In comparison, LDA performs competitively on the “Reuters” dataset, but is substantially unsuccessful on others. Meanwhile, LPP performs competitively on the “Wiki events” and “Newsgroups” datasets, but it underperforms on the other two. Laplacian, LSI, and Tfidf-N can achieve comparably performance if their reduced dimensions are fine tuned, which 1852 Dataset size class length est. #voc. BBCNews abstr. 2,225 5 26 7,452 Wiki events 1,983 54 22 5,313 Reuters 7,316 10 141 27,792 Newgroups 18,612 20 245 55,970 BBCSports 737 5 345 13,105 Ohsumed 4,340 23 Ohsumed-full⇤ 34,386 23 184 43,895 Table 2: Description of corpus data that have been used in our experiments. ⇤Ohsumed-full dataset is used for pre-training word embeddings only. Ohsumed is a downsampled evaluation set resulting from removing posts from Ohsumed-full that belong to multiple categories. unfortunately is unrealistic in practice. NMF is a simple and effective method which always gives stable, though subpar, performance. Short texts vs. long texts. D2-clustering performs much more impressively on short texts (“BBC abstract” and “Wiki events”) than it does on long texts (“Reuters” and “Newsgroups”). This outcome is somewhat expected, because the bagof-words method suffers from high sparsity for short texts, and word-embedding based methods in theory should have an edge here. As shown in Fig. 1, D2-clustering has indeed outperformed other non-embedding approaches by a large margin on short texts (improved by about 40% and 20% respectively). Nevertheless, we find lifting from word embedding to document clustering is not without a cost. Neither AvgDoc nor PV can perform as competitively as D2-clustering performs on both. Domain-specific text datasets. We are also interested in how word embedding can help group domain-specific texts into clusters. In particular, does the semantic knowledge “embedded” in words provides enough clues to discriminate fine-grained concepts? We report the best AMI achieved by each method in Table 3. Our preliminary result indicates state-of-the-art word embeddings do not provide enough gain here to exceed the performance of existing methodologies. On the unchallenging one, the “BBCSport” dataset, basic bag-of-words approaches (Tfidf and Tfidf-N) already suffice to discriminate different sport categories; and on the difficult one, the “Ohsumed” dataset, D2-clustering only slightly improves over Tfidf and others, ranking behind LPP. Meanwhile, we feel the overall quality of clustering “Ohsumed” texts is quite far from useful in practice, no matter which method to use. More discussions will be provided next. 4.5 Sensitivity to Word Embeddings. We validate the robustness of D2-clustering with different word embedding models, and we also show all their results in Fig. 2. As we mentioned, the effectiveness of Wasserstein document clustering depends on how relevant the utilized word embeddings are with the tasks. 
In those general document clustering tasks, however, word embedding models trained on general corpus perform robustly well with acceptably small variations. This outcome reveals our framework as generally effective and not dependent on a specific word embedding model. In addition, we also conduct experiments with word embeddings with smaller dimensions, at 50 and 100. Their results are not as good as those we have reported (therefore detailed numbers are not included due to space limitation). Inadequate embeddings may not be disastrous. In addition to our standard running set, we also used D2-clustering with purely random word embeddings, meaning each word vector is independently sampled from spherical Gaussian at 300 dimension, to see how deficient it can be. Experimental results show that random word embeddings degrade the performance of D2-clustering, but it still performs much better than purely random clustering, and is even consistently better than LDA. Its performances across different datasets is highly correlated with the bag-of-words (Tfidf and Tfidf-N). By comparing a pre-trained word embedding model to a randomly generated one, we find that the extra gain is significant (> 10%) in clustering four of the six datasets. Their detailed statistics are in Table 4 and Fig. 3. 5 Discussions Performance advantage. There has been one immediate observation from these studies, D2clustering always outperforms two of its degenerated cases, namely Tf-idf and AvgDoc, and three other popular methods: LDA, NMF, and PV, on all tasks. Therefore, for document clustering, users can expect to gain performance improvements by using our approach. Clustering sensitivity. From the four 2D plots in Fig. 1, we notice that the results of Laplacian, 1853 regular dataset domain-specific dataset BBCNews abstract Wik events Reuters Newsgroups BBCSport Ohsumed Avg. Tfidf-N 0.389 0.448 0.470 0.388 0.883 0.210 0.465 Tfidf 0.376 0.446 0.456 0.417 0.799 0.235 0.455 Laplacian 0.538 0.395 0.448 0.385 0.855 0.223 0.474 LSI 0.454 0.379 0.400 0.398 0.840 0.222 0.448 LPP 0.521 0.462 0.426 0.515 0.859 0.284 0.511 NMF 0.537 0.395 0.438 0.453 0.809 0.226 0.476 LDA 0.151 0.280 0.503 0.288 0.616 0.132 0.328 AvgDoc 0.753 0.312 0.413 0.376 0.504 0.172 0.422 PV 0.428 0.289 0.471 0.275 0.553 0.233 0.375 D2C (Our approach) 0.759 0.545 0.534 0.493 0.812 0.260 0.567 Table 3: Best AMIs (Vinh et al., 2010) of compared methods on different datasets and their averaging. The best results are marked in bold font for each dataset, the 2nd and 3rd are marked by blue and magenta colors respectively. Figure 2: Sensitivity analysis: the clustering performances of D2C under different word embeddings. Left: Reuters, Right: Newsgroups. An extra evaluation index (CCD (Zhou et al., 2005)) is also used. ARI AMI V-measure BBCNews .146 .187 .190 abstract .792+442% .759+306% .762+301% Wiki events .194 .369 .463 .277+43% .545+48% .611+32% Reuters .498 .524 .588 .515+3% .534+2% .594+1% Newsgroups .194 .358 .390 .305+57% .493+38% .499+28% BBCSport .755 .740 .760 .801+6% .812+10% .817+8% Ohsumed .080 .204 .292 .116+45% .260+27% .349+20% Table 4: Comparison between random word embeddings (upper row) and meaningful pre-trained word embeddings (lower row), based on their best ARI, AMI, and V-measures. The improvements by percentiles are also shown in the subscripts. LSI and Tfidf-N are rather sensitive to their extra hyper-parameters. 
Once the vocabulary 25% 75% 68% 32% 98% 2% 73% 27% 91% 9% 78% 22% Figure 3: Pie charts of clustering gains in AMI calculated from our framework. Light region is by bag-of-words, and dark region is by pre-trained word embeddings. Six datasets (from left to right): BBCNews abstract, Wiki events, Reuters, Newsgroups, BBCSport, and Ohsumed. set, weight scheme and embeddings of words are fixed, our framework involves only two additional hyper-parameters: the number of intended clusters, K, and the selected support size of centroid distributions, m. We have chosen more than one m in all related experiments (m = {64, 100} for long documents, and m = {10, 20} for short documents). Our empirical experiments show that the effect of m on different metrics is less 1854 sensitive than the change of K. Results at different K are plotted for each method (Fig. 1). The gray dots denote results of multiple runs of D2clustering. They are always contracted around the top-right region of the whole population, revealing the predictive and robustly supreme performance. When bag-of-words suffices. Among the results of “BBCSport” dataset, Tfidf-N shows that by restricting the vocabulary set into a smaller one (which may be more relevant to the interest of tasks), it already can achieve highest clustering AMI without any other techniques. Other unsupervised regularization over data is likely unnecessary, or even degrades the performance slightly. Toward better word embeddings. Our experiments on the Ohsumed dataset have been limited. The result shows that it could be highly desirable to incorporate certain domain knowledge to derive more effective vector embeddings of words and phrases to encode their domain-specific knowledge, such as jargons that have knowledge dependencies and hierarchies in educational data mining, and signal words that capture multidimensional aspects of emotions in sentiment analysis. Finally, we report the best AMIs of all methods on all datasets in Table 3. By looking at each method and the average of best AMIs over six datasets, we find our proposed clustering framework often performs competitively and robustly, which is the only method reaching more than 90% of the best AMI on each dataset. Furthermore, this observation holds for varying lengths of documents and varying difficulty levels of clustering tasks. 6 Conclusions and Future Work This paper introduces a nonparametric clustering framework for document analysis. Its computational tractability, robustness and supreme performance, as a fundamental tool, are empirically validated. Its ease of use enables data scientists to apply it for the pre-screening purpose of examining word embeddings in a specific task. Finally, the gains acquired from word embeddings are quantitatively measured from a nonparametric unsupervised perspective. It would also be interesting to investigate several possible extensions to the current clustering work. One direction is to learn a proper ground distance for word embeddings such that the final document clustering performance can be improved with labeled data. The work by (Huang et al., 2016; Cuturi and Avis, 2014) have partly touched this goal with an emphasis on document proximities. A more appealing direction is to develop problem-driven methods to represent a document as a distributional entity, taking into consideration of phrases, sentence structures, and syntactical characteristics. 
We believe the framework of Wasserstein distance and D2-clustering creates room for further investigation on complex structures and knowledge carried by documents. Acknowledgments This material is based upon work supported by the National Science Foundation under Grant Nos. ECCS-1462230, DMS-1521092, and Research Grants Council of Hong Kong under Grant No. PolyU 152094/14E. The primary computational infrastructures used were supported by the Foundation under Grant Nos. ACI-0821527 (CyberStar) and ACI-1053575 (XSEDE). References Martial Agueh and Guillaume Carlier. 2011. Barycenters in the Wasserstein space. SIAM J. Math. Analysis 43(2):904–924. Mikhail Belkin and Partha Niyogi. 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems (NIPS). volume 14, pages 585– 591. Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyr´e. 2015. Iterative Bregman projections for regularized transportation problems. SIAM J. Sci. Computing (SJSC) 37(2):A1111–A1138. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. J. Machine Learning Research (JMLR) 3:993–1022. Deng Cai, Xiaofei He, and Jiawei Han. 2005. Document clustering using locality preserving indexing. Trans. Knowledge and Data Engineering (TKDE) 17(12):1624–1637. Marco Cuturi and David Avis. 2014. Ground metric learning. Journal of Machine Learning Research 15(1):533–564. Marco Cuturi and Arnaud Doucet. 2014. Fast computation of Wasserstein barycenters. In Int. Conf. Machine Learning (ICML). pages 685–693. 1855 Scott C. Deerwester, Susan T Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. J. American Soc. Information Science 41(6):391–407. Charles Elkan. 2003. Using the triangle inequality to accelerate k-means. In Int. Conf. Machine Learning (ICML). volume 3, pages 147–153. Xiaofei He and Partha Niyogi. 2004. Locality preserving projections. In Advances in Neural Information Processing Systems (NIPS). MIT, volume 16, page 153. Matthew Hoffman, Francis R Bach, and David M Blei. 2010. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems (NIPS). pages 856–864. Gao Huang, Chuan Guo, Matt J Kusner, Yu Sun, Fei Sha, and Kilian Q Weinberger. 2016. Supervised word mover’s distance. In Advances in Neural Information Processing Systems (NIPS). pages 4862– 4870. Matt J Kusner, Yu Sun, Nicholas N I. Kolkin, and K Q. Weinberger. 2015. From word embeddings to document distances. In Int. Conf. Machine Learning (ICML). Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Int. Conf. Machine Learning. pages 1188–1196. Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401(6755):788–791. Jia Li and James Z Wang. 2008. Real-time computerized annotation of pictures. Trans. Pattern Analysis and Machine Intelligence (PAMI) 30(6):985–1002. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS). pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In HLT-NAACL. pages 746– 751. James B Orlin. 1993. A faster strongly polynomial minimum cost flow algorithm. 
Operations research 41(2):338–350. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). volume 14, pages 1532–1543. William M Rand. 1971. Objective criteria for the evaluation of clustering methods. J. American Statistical Association 66(336):846–850. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In EMNLP-CoNLL. volume 7, pages 410–420. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2011. Largescale cross-document coreference using distributed inference and hierarchical models. In ACL-HLT. Association for Computational Linguistics, pages 793–803. Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. J. Documentation 28(1):11–21. Alexander Strehl and Joydeep Ghosh. 2003. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. J. Machine Learning Research (JMLR) 3:583–617. C´edric Villani. 2003. Topics in optimal transportation. 58. American Mathematical Soc. Nguyen Xuan Vinh, Julien Epps, and James Bailey. 2010. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. J. Machine Learning Research (JMLR) 11:2837–2854. Xiaojun Wan. 2007. A novel document similarity measure based on earth movers distance. Information Sciences 177(18):3718–3730. Huahua Wang and Arindam Banerjee. 2014. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems (NIPS). pages 2816–2824. Zhaohui Wu, Chen Liang, and C Lee Giles. 2015. Storybase: Towards building a knowledge base for news events. In ACL-IJCNLP 2015. pages 133–138. Wei Xu, Xin Liu, and Yihong Gong. 2003. Document clustering based on non-negative matrix factorization. In ACM SIGIR Conf. on Research and Development in Informaion Retrieval. ACM, pages 267–273. Jianbo Ye and Jia Li. 2014. Scaling up discrete distribution clustering using admm. In Int. Conf. Image Processing (ICIP). IEEE, pages 5267–5271. Jianbo Ye, Panruo Wu, James Z. Wang, and Jia Li. 2017. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. IEEE Trans. on Signal Processing (TSP) 65(9):2317– 2332. Zhongwu Zhai, Bing Liu, Hua Xu, and Peifa Jia. 2011. Clustering product features for opinion mining. In Int. Conf. on Web Search and Data Mining (WSDM). ACM, pages 347–354. Ding Zhou, Jia Li, and Hongyuan Zha. 2005. A new mallows distance based metric for comparing clusterings. In Int. Conf. Machine Learning (ICML). ACM, pages 1028–1035. 1856
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 179–188 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 179–188 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1017 Creating Training Corpora for NLG Micro-Planning Claire Gardent Anastasia Shimorina CNRS, LORIA, UMR 7503 Vandoeuvre-l`es-Nancy, F-54500, France {claire.gardent,anastasia.shimorina}@loria.fr Shashi Narayan Laura Perez-Beltrachini School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh, EH8 9AB, UK {shashi.narayan,lperez}@ed.ac.uk Abstract In this paper, we present a novel framework for semi-automatically creating linguistically challenging microplanning data-to-text corpora from existing Knowledge Bases. Because our method pairs data of varying size and shape with texts ranging from simple clauses to short texts, a dataset created using this framework provides a challenging benchmark for microplanning. Another feature of this framework is that it can be applied to any large scale knowledge base and can therefore be used to train and learn KB verbalisers. We apply our framework to DBpedia data and compare the resulting dataset with Wen et al. (2016)’s. We show that while Wen et al.’s dataset is more than twice larger than ours, it is less diverse both in terms of input and in terms of text. We thus propose our corpus generation framework as a novel method for creating challenging data sets from which NLG models can be learned which are capable of handling the complex interactions occurring during in micro-planning between lexicalisation, aggregation, surface realisation, referring expression generation and sentence segmentation. To encourage researchers to take up this challenge, we recently made available a dataset created using this framework in the context of the WEBNLG shared task. 1 Introduction To train Natural Language Generation (NLG) systems, various input-text corpora have been developed which associate (numerical, formal, linguistic) input with text. As discussed in detail in Section 2, these corpora can be classified into three main types namely, (i) domain specific corpora, (ii) benchmarks constructed from “Expert” Linguistic Annotations and (iii) crowdsourced benchmarks.1 In this paper, we focus on how to create datato-text corpora which can support the learning of micro-planners i.e., data-to-text generation systems that can handle the complex interactions occurring between lexicalisation (mapping data to words), aggregation (exploiting linguistic constructs such as ellipsis and coordination to avoid repetition), surface realisation (using the appropriate syntactic constructs to build sentences), sentence segmentation and referring expression generation. We start by reviewing the main existing types of NLG benchmarks and we argue for a crowdsourcing approach in which (i) data units are automatically built from an existing Knowledge Base (KB) and (ii) text is crowdsourced from the data (Section 2). We then propose a generic framework for semi-automatically creating training corpora for NLG (Section 3) from existing knowledge bases. In Section 4, we apply this framework to DBpedia data and we compare the resulting dataset with the dataset of Wen et al. 
(2016) using various metrics to evaluate the linguistic and computational adequacy of both datasets. By applying these metrics, we show that while Wen et al.’s dataset is more than twice larger than ours, it is less diverse both in terms of input and in terms of text. We also com1We ignore here (Lebret et al., 2016)’s dataset which was created fully automatically from Wikipedia by associating infoboxes with text because this dataset fails to ensure an adequate match between data and text. We manually examined 50 input/output pairs randomly extracted from this dataset and did not find a single example where data and text matched. As such, this dataset is ill-suited for training microplanners. Moreover, since its texts contain both missing and additional information, it cannot be used to train joint models for content selection and micro-planning either. 179 pare the performance of a sequence-to-sequence model (Vinyals et al., 2015) on both datasets to estimate the complexity of the learning task induced by each dataset. We show that the performance of this neural model is much lower on the new data set than on the existing ones. We thus propose our corpus generation framework as a novel method for creating challenging data sets from which NLG models can be learned which are capable of generating complex texts from KB data. 2 NLG Benchmarks Domain specific benchmarks. Several domain specific data-text corpora have been built by researchers to train and evaluate NLG systems. In the sports domain, Chen and Mooney (2008) constructed a dataset mapping soccer games events to text which consists of 1,539 data-text pairs and a vocabulary of 214 words. For weather forecast generation, the dataset of Liang et al. (2009) includes 29,528 data-text pairs with a vocabulary of 345 words. For the air travel domain, Ratnaparkhi (2000) created a dataset consisting of 5,426 datatext pairs with a richer vocabulary (927 words) and in the biology domain, the KBGen shared task (Banik et al., 2013) made available 284 data-text pairs where the data was extracted from an existing knowledge base and the text was authored by biology experts. An important limitation of these datasets is that, because they are domain specific, systems learned from them are restricted to generating domain specific, often strongly stereotyped text (e.g., weather forecast or soccer game commentator reports). Arguably, training corpora for NLG should support the learning of more generic systems capable of handling a much wider range of linguistic interactions than is present in stereotyped texts. By nature however, domain specific corpora restrict the lexical and often the syntactic coverage of the texts to be produced and thereby indirectly limit the expressivity of the generators trained on them. Benchmarks constructed from “expert” linguistic annotations. NLG benchmarks have also been proposed where the input data is either derived from dependency parse trees (SR’11 task, Belz et al. 2011) or constructed through manual annotation (AMR Corpus, Banarescu et al. 2012). Contrary to the domain-specific data sets just mentioned, these corpora have a wider coverage and are large enough for training systems that can generate linguistically sophisticated text. One main drawback of these benchmarks however is that their construction required massive manual annotation of text with complex linguistic structures (parse trees for the SR task and Abstract Meaning Representation for the AMR corpus). 
Moreover because these structures are complex, the annotation must be done by experts. It cannot be delegated to the crowd. In short, the creation of such benchmark is costly both in terms of time and in terms of expertise. Another drawback is that, because the input representation derived from a text is relatively close to its surface form2, the NLG task is mostly restricted to surface realisation (mapping input to sentences). That is, these benchmarks give very limited support for learning models that can handle the interactions between micro-planning subtasks. Crowdsourced benchmarks. More recently, data-to-text benchmarks have also been created by associating data units with text using crowdsourcing. Wen et al. (2016) first created data by enumerating all possible combinations of 14 dialog act types (e.g., request, inform) and attribute-value pairs present in four small-size, hand-written ontologies about TVs, laptops, restaurants and hotels. They then use crowdsourcing to associate each data unit with a text. The resulting dataset is both large and varied (4 domains) and was successfully exploited to train neural and imitation learning data-to-text generator (Wen et al., 2016; Lampouras and Vlachos, 2016). Similarly, Novikova and Rieser (2016) described a framework for collecting data-text pairs using automatic quality control measures and evaluating how the type of the input representations (text vs pictures) impacts the quality of crowdsourced text. The crowdsourcing approach to creating inputtext corpora has several advantages. First, it is low cost in that the data is produced automatically and the text is authored by a crowdworker. This is in stark contrast with the previous approach where expert linguists are required to align text with data. 2For instance, the input structures made available by the shallow track of the SR task contain all the lemmas present in the corresponding text. In this case, the generation task is limited to determining (i) the linear ordering and (ii) the full form of the word in the input. 180 Second, because the text is crowd-sourced from the data (rather than the other way round), there is an adequate match between text and data both semantically (the text expresses the information contained in the data) and computationally (the data is sufficiently different from the text to require the learning of complex generation operations such as sentence segmentation, aggregation and referring expression generation). Third, by exploiting small hand-written ontologies to quickly construct meaningful artificial data, the crowdsourcing approach allows for the easy creation of a large dataset with data units of various size and bearing on different domains. This, in turn, allows for better linguistic coverage and for NLG tasks of various complexity since typically, inputs of larger size increases the need for complex microplanning operations. 3 The WebNLG Framework for Creating Data-to-Text, Micro-Planning Benchmarks While as just noted, the crowdsourcing approach presented by Wen et al. (2016) has several advantages, it also has a number of shortcomings. One important drawback is that it builds on artificial rather than “real” data i.e., data that would be extracted from an existing knowledge base. As a result, the training corpora built using this method cannot be used to train KB verbalisers i.e., generation systems that can verbalise KB fragments. Another limitation concerns the shape of the input data. 
Wen et al.’s data can be viewed as trees of depth one (a set of attributes-value pairs describing a single entity e.g., a restaurant or a laptop). As illustrated in Figure 1 however, there is a strong correlation between the shape of the input and the syntactic structure of the corresponding sentence. The path structure T1 where B is shared by two predicates (mission and operator) will favour the use of a participial or a passive subject relative clause. In contrast, the branching structure T2 will favour the use of a new clause with a pronominal subject or a coordinated VP. More generally, allowing for trees of deeper depth is necessary to indirectly promote the introduction in the benchmark of a more varied set of syntactic constructs to be learned by generators. To address these issues, we introduce a novel method for creating data-to-text corpora from large knowledge bases such as DBPedia. Our T1 A B C mission operator S1.1 A participated in mission B operated by C. S1.2 A participated in mission B which was operated by C. T2 A D E occupation birthPlace S2.1 A was born in E. She worked as an engineer. S2.2 A was born in E and worked as an engineer. Figure 1: Input shape and linguistic structures (A = Susan Helms, B = STS 78, C = NASA, D = engineer, E = Charlotte, North Carolina). method combines (i) a content selection module designed to extract varied, relevant and coherent data units from DBPedia with (ii) a crowdsourcing process for associating data units with human authored texts that correctly capture their meaning. Example 1 shows a data/text unit created by our method using DBPedia as input KB. (1) a. (John E Blaha birthDate 1942 08 26) (John E Blaha birthPlace San Antonio) (John E Blaha occupation Fighter pilot) b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot Our method has the following features. First, it can be used to create a data-to-text corpus from any knowledge base where entities are categorised and there is a large number of entities belonging to the same category. As noted above, this means that the resulting corpus can be used to train KB verbalisers i.e., generators that are able to verbalise fragments of existing knowledge bases. It could be used for instance, to verbalise fragments of e.g., MusicBrainz3, FOAF4 or LinkedGeoData.5 Second, as crowdworkers are required to enter text that matches the data and a majority vote validation process is used to eliminate mis-matched pairs, there is a direct match between text and data. This allows for a clear focus on the non content selection part of generation known as microplanning. Third, because data of increasing size is matched with texts ranging from simple clauses to 3https://musicbrainz.org/ 4http://www.foaf-project.org/ 5http://linkedgeodata.org/ 181 Figure 2: Extracting data units from DBPedia. short texts consisting of several sentences, the resulting benchmark is appropriate for exercising the main subtasks of microplanning. For instance, in Example (1) above, given the input shown in (1a), generating (1b) involves lexicalising the occupation property as the phrase worked as (lexicalisation); using PP coordination (born in San Antonio on 1942-08-26) to avoid repeating the word born (aggregation); and verbalising the three triples using a single complex sentence including an apposition, a PP coordination and a transitive verb construction (sentence segmentation and surface realisation). 
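For readers who want to work with such data/text units programmatically, the short sketch below shows one possible in-memory representation of a unit like Example (1), together with its input pattern (the property set with subjects and objects abstracted away, as used in the statistics reported later). The class and field names are our own and do not reflect the released dataset format.

```python
# Illustrative representation of a WebNLG-style data/text unit: a set of RDF
# triples paired with crowdsourced verbalisations. Names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, property, object)

@dataclass
class DataTextUnit:
    triples: List[Triple]
    texts: List[str] = field(default_factory=list)  # crowdsourced verbalisations

    def input_pattern(self) -> frozenset:
        """Abstract over subjects and objects: keep only the set of properties."""
        return frozenset(prop for _, prop, _ in self.triples)

unit = DataTextUnit(
    triples=[
        ("John_E_Blaha", "birthDate", "1942-08-26"),
        ("John_E_Blaha", "birthPlace", "San_Antonio"),
        ("John_E_Blaha", "occupation", "Fighter_pilot"),
    ],
    texts=["John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot."],
)
print(unit.input_pattern())  # frozenset({'birthDate', 'birthPlace', 'occupation'})
```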
3.1 DBPedia To illustrate the functioning of our benchmark creation framework, we apply it to DBPedia. DBPedia is a multilingual knowledge base that was built from various kinds of structured information contained in Wikipedia (Mendes et al., 2012). This data is stored as RDF (Resource Description Format) triples of the form (subject, property, object) where the subject is a URI (Uniform Resource Identifier), the property is a binary relation and the object is either a URI or a literal value such as a string, a date or a number. We use an English version of the DBPedia knowledge base which encompasses 6.2M entities, 739 classes, 1,099 properties with reference values and 1,596 properties with typed literal values.6 3.2 Selecting Content To create data units, we adapted the procedure outlined by Perez-Beltrachini et al. (2016) and sketched in Figure 2. This method can be summarised as follows. First, DBPedia category graphs are extracted from DBPedia by retrieving up to 500 entity graphs for entities of the same category.7 For example, we build a category graph for the Astronaut category by collecting, graphs of depth five for 500 entities of types astronaut. Next, category graphs are used to learn bi-gram models of DBPedia properties which specify the probability of two properties co-occuring together. Three types of bi-gram models are extracted from category graphs using the SRILM toolkit (Stolcke, 2002): one model (S-Model) for bigrams occurring in sibling triples (triples with a shared subject); one model (C-Model) for bigrams occurring in chained triples (the object of one triple is the subject of the other); and one model (M-Model) which is a linear interpolation of the sibling and the chain model. The intuition is that these sib6http://wiki.dbpedia.org/ dbpedia-dataset-version-2015-10 7An entity graph for some entity e is a graph obtained by traversing the DBPedia graph starting in e and stopping at depth five. 182 ling and chain models capture different types of coherence, namely, topic-based coherence for the S-Model and discourse-based coherence for the CModel. Finally, the content selection task is formulated as an Integer Linear Programming (ILP) problem to select, for a given entity of category C and its entity graph Ge, subtrees of Ge with maximal bigram probability and varying size (between 1 and 7 RDF triples). Category A B M U S W #Inputs 663 1220 333 508 1137 1207 #I. Patterns 546 369 300 432 184 277 #Properties 38 46 30 41 32 50 #Entities 74 278 47 75 264 224 Table 1: Data statistics from content selection (A:Astronaut, B:Building, M:Monument, U:University, W:Written work, S:Sports team). We applied this content selection procedure to the DBPedia categories Astronaut (A), Building (B), Monument (M), University (U), Sports team (S) and Written work (W), using the three bi-gram models (S-Model, C-Model, M-Model) and making the number of triples required by the ILP constraint to occur in the output solutions vary between 1 and 7. The results are shown in Table 1. An input is a set of triples produced by the content selection module. The number of input (#Inputs) is thus the number of distinct sets of triples produced by this module. In contrast, input patterns are inputs where subject and object have been abstracted over. That is, the number of input patterns (#I. Patterns) is the number of distinct sets of properties present in the set of inputs. The number of properties (#Properties) is the number of distinct RDF properties occurring in the dataset. 
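As an aside, the sibling and chain property bi-grams that drive the content selection above can be illustrated with a short sketch. The triples below are toy data and the counting is deliberately simplified: the actual models are n-gram models estimated with SRILM over DBPedia category graphs.

```python
# Illustrative collection of sibling (shared subject, S-Model) and chain
# (object of one triple is subject of another, C-Model) property bigrams.
from collections import Counter
from itertools import combinations

triples = [  # hypothetical entity-graph fragment
    ("A", "mission", "B"), ("B", "operator", "C"),
    ("A", "occupation", "D"), ("A", "birthPlace", "E"),
]

sibling_bigrams = Counter()   # property pairs of triples sharing a subject
chain_bigrams = Counter()     # property pairs along subject-object chains

for (s1, p1, o1), (s2, p2, o2) in combinations(triples, 2):
    if s1 == s2:
        sibling_bigrams[tuple(sorted((p1, p2)))] += 1
    if o1 == s2 or o2 == s1:
        chain_bigrams[(p1, p2) if o1 == s2 else (p2, p1)] += 1

print(sibling_bigrams)  # e.g. ('birthPlace', 'occupation'), ('mission', 'occupation'), ...
print(chain_bigrams)    # e.g. ('mission', 'operator')
```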
Similarly, the number of entities (#Entities) is the number of distinct RDF subjects and objects occurring in each given dataset. 3.3 Associating Content with Text We associate data with text using the Crowdflower platform.8 We do this in four main steps as follows. 1. Clarifying properties. One difficulty when collecting texts verbalising sets of DBPedia triples is that the meaning of DBPedia properties may be unclear. We therefore first manually clarified 8http://www.crowdflower.com for each category being worked on, those properties which have no obvious lexicalisations (e.g., crew1up was replaced by commander). 2. Getting verbalisations for single triples. Next, we collected three verbalisations for data units of size one, i.e. single triples consisting of a subject, a property and an object. For each such input, crowdworkers were asked to produce a sentence verbalising its content. We used both a priori automatic checks to prevent spamming and a posteriori manual checks to remove incorrect verbalisations. We also monitored crowdworkers as they entered their input and banned those who tried to circumvent our instructions and validators. The automatic checks comprise 12 custom javascript validators implemented in the CrowdFlower platform to block contributor answers which fail to meet requirements such as the minimal time a contributor should stay on page, the minimal length of the text produced, the minimal match of tokens between a triple and its verbalisation and various format restrictions used to detect invalid input. The exact match between a triple and its verbalisation was also prohibited. In addition, after data collection was completed, we manually checked each data-text pair and eliminated from the data set any pair where the text either did not match the information conveyed by the triple or was not a well-formed English sentence. 3. Getting verbalisations for input containing more than one triple. The verbalisations collected for single triples were used to construct input with bigger size. Thus, for input with a number of triples more than one, the crowd was asked to merge the sentences corresponding to each triple (obtained in step 2) into a natural sounding text. In such a way, we diminish the risk of having misinterpretations of the original semantics of a data unit. Contributors were also encouraged to change the order, and the wording of sentences, while writing their texts. For each data unit, we collected three verbalisations. 4. Verifying the quality of the collected texts. The verbalisations obtained in Step 3 were verified through crowdsourcing. Each verbalisation collected in Step 3 was displayed to CrowdFlower contributors together with the corresponding set of triples. Then the crowd was asked to assess its fluency, semantic adequacy, and grammaticality. Those criteria were checked by asking the follow183 # Triples 1 2 3 4 5 6 7 # Tokens 4/30/10.48 11/45/22.97 7/37/16.96 17/60/36.38 14/53/29.61 29/80/49.14 24/73/42.95 # Sentences 1/2/1.00 1/4/1.23 1/3/1.02 1/5/2.05 1/4/1.64 1/6/2.85 1/5/2.42 Table 2: Text statistics from crowdsourcing for triple sets of varying sizes (min/max/avg). ing three questions: Does the text sound fluent and natural? Does the text contain all and only the information from the data? Is the text good English (no spelling or grammatical mistakes)? We collected five answers per verbalisation. A verbalisation was considered bad, if it received three negative answers in at least one criterion. 
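To give a flavour of the automatic checks applied in Step 2, the sketch below implements a simplified token-overlap test between a triple and a candidate verbalisation, together with the ban on verbatim copies of the triple. The thresholds, tokenisation and function names are illustrative stand-ins for the CrowdFlower javascript validators actually used.

```python
# Simplified re-implementation of two of the Step-2 checks: a minimal token
# overlap between triple and verbalisation, and a ban on exact copies of the
# triple. Thresholds are illustrative, not those of the real validators.
import re

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def passes_checks(triple: tuple, verbalisation: str,
                  min_overlap: int = 2, min_length: int = 4) -> bool:
    triple_text = " ".join(triple).replace("_", " ")
    verb_tokens = tokens(verbalisation)
    if len(verb_tokens) < min_length:
        return False                       # text too short
    if tokens(triple_text) == verb_tokens:
        return False                       # verbatim copy of the triple
    return len(tokens(triple_text) & verb_tokens) >= min_overlap

triple = ("John_E_Blaha", "birthPlace", "San_Antonio")
print(passes_checks(triple, "John E Blaha was born in San Antonio."))  # True
print(passes_checks(triple, "John E Blaha birthPlace San Antonio"))    # False
```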
After the verification step, the total corpus loss was of 8.7%. An example of rejected verbalisation can be found in Example (2). The verbalisation was dropped due to the lack of fluency (awkward lexicalisation of the property club). (2) (AEK Athens F.C. manager Gus Poyet) (Gus Poyet club Chelsea F.C.) AEK Athens F.C. are managed by Gus Poyet, who is in Chelsea F.C. Table 2 shows some statistics about the texts obtained using our crowdsourcing procedure for triple sets of size one to seven. 4 Comparing Benchmarks We now compare a dataset created using our dataset creation framework (henceforth WEBNLG) with the dataset of Wen et al. (2016)9 (henceforth, RNNLG). Example 3 shows a sample data-text pair taken from the RNNLG dataset. (3) Dialog Moves recommend(name=caerus 33;type=television; screensizerange=medium;family=t5;hasusbport=true) The caerus 33 is a medium television in the T5 family that’s USB-enabled. As should be clear from the discussion in Section 2 and 3, both datasets are similar in that, in both cases, data is built from ontological information and text is crowdsourced from the data. An important difference between the two datasets is that, while the RNNLG data was constructed by enumerating possible combinations of dialog act types and attribute-value pairs, the WEBNLG data is created using a sophisticated content selection procedure geared at producing sets of data 9https://github.com/shawnwun/RNNLG units that are relevant for a given ontological category and that are varied in terms of size, shape and content. We now investigate the impact of this difference on the two datasets (WEBNLG and RNNLG). To assess the degree to which both datasets support the generation of linguistically varied text requiring complex micro-planning operations, we examine a number of data and text related metrics. We also compare the results of an out-of-the-box sequence-to-sequence model as a way to estimate the complexity of the learning task induced by each dataset. WEBNLG RNNLG Nb. Input 5068 22225 Nb. Data-Text Pairs 13339 30842 Nb. Domains 6 4 Nb. Attributes 172 108 Nb. Input Patterns 2108 2155 Nb. Input / Nb Input Pattern 2.40 10.31 Nb. Input Shapes 58 6 Table 3: Comparing WEBNLG and RNNLG datasets. Attributes are properties in RDF triples or slots in dialog acts. 4.1 Data Comparison Terminology. The attributes in the RNNLG dataset can be viewed as binary relations between the object talked about (a restaurant, a laptop, a TV or a hotel) and a value. Similarly, in the WEBNLGdataset, DBpedia RDF properties relate a subject entity to an object which can be either an entity or a datatype value. In what follows, we refer to both as attributes. Table 3 shows several statistics which indicate that, while the RNNLG dataset is larger than WEBNLG, WEBNLG is much more diverse in terms of attributes, input patterns and input shapes. Number of attributes. As illustrated in Example (4) below, different attributes can be lexicalised using different parts of speech. A dataset with a larger number of attributes is therefore more likely to induce texts with greater syntactic variety. (4) Verb: X title Y / X served as Y Relational noun: X nationality Y / X’s nationality is Y Preposition: X country Y / X is in Y Adjective: X nationality USA / X is American 184 As shown in Table 3, WEBNLG has a more diverse attribute set than RNNLG both in absolute (172 attributes in WEBNLG against 108 in RNNLG) and in relative terms (RNNLG is a little more than twice as large as WEBNLG). Number of input patterns. 
Since attributes may give rise to lexicalisation with different parts of speech, the sets of attributes present in an input (input pattern)10 indirectly determine the syntactic realisation of the corresponding text. Hence a higher number of input patterns will favour a higher number of syntactic realisations. This is exemplified in Example (5) where two inputs with the same number of attributes give rise to texts with different syntactic forms. While in Example (5a), the attribute set {country, location, startDate} is realised by a passive (is located), an apposition (Australia) and a deverbal nominal (its construction), in Example (5b), the attribute set {almaMater, birthPlace, selection} induced a passive (was born) and two VP coordinations (graduated and joined). (5) a. (‘108 St Georges Terrace location Perth’, ‘Perth country Australia’, ‘108 St Georges Terrace startDate 1981’) country, location, startDate 108 St. Georges Terrace is located in Perth, Australia. Its construction began in 1981. passive, apposition, deverbal nominal b. (‘William Anders selection 1963’, ‘William Anders birthPlace British Hong Kong’, ‘William Anders almaMater ”AFIT, M.S. 1962”’) almaMater, birthPlace, selection William Anders was born in British Hong Kong, graduated from AFIT in 1962, and joined NASA in 1963. passive, VP coordination, VP coordination Again, despite the much larger size of the RNNLG dataset, the number of input patterns in both datasets is almost the same. That is, the relative variety in input patterns is higher in WEBNLG. Number of input / Number of input patterns. The ratio between number of inputs and the number of input patterns has an important impact both in terms of linguistic diversity and in terms of learning complexity. A large ratio indicates a “repetitive dataset” where the same pattern is instantiated a high number of times. While this 10Recall from section 3 that input patterns are inputs where subjects and objects have been remove thus, in essence, an input pattern is the set of all the attributes occurring in a given input. facilitates learning, this also reduces linguistic coverage (less combinations of structures can be learned) and may induce over-fitting. Note that because datasets are typically delexicalised when training NLG models (cf. e.g., Wen et al. 2015 and Lampouras and Vlachos 2016), at training time, different instantiations of the same input pattern reduce to identical input. The two datasets markedly differ on this ratio which is five times lower in WEBNLG. While in WEBNLG, the same pattern is instantiated in average 2.40 times, it is instantiated 10.31 times in average in RNNLG. From a learning perspective, this means that the RNNLG dataset facilitates learning but also makes it harder to assess how well systems trained on it can generalise to handle unseen input. Input shape. As mentioned in Section 3, in the RNNLG dataset, all inputs can be viewed as trees of depth one while in the WEBNLG dataset, input may have various shapes. As a result, RNNLG texts will be restricted to syntactic forms which permit expressing such multiple predications of the same entity e.g., subject relative clause, VP and sentence coordination etc. In contrast, the trees extracted by the WEBNLG content selection procedure may be of depth five and therefore allow for further syntactic constructs such as object relative clause and passive participles (cf. Figure 1). We can show this empirically as well that WEBNLG is far more diverse than RNNLG in terms of input shapes. 
The RNNLG dataset has only 6 distinct shapes and all of them are of depth 1, i.e., all (attribute, value) pairs in an input are siblings to each other. In contrast, the WEBNLG dataset has 58 distinct shapes, out of which only 7 shapes are with depth 1, all others have depth more than 1 and they cover 49.6% of all inputs. 4.2 Text Comparison Table 4 gives some statistics about the texts contained in each dataset. (6) (Alan Bean birthDate “1932-03-15”) Alan Bean was born on March 15, 1932. (7) (‘Alan Bean nationality United States’, ‘Alan Bean birthDate “1932-03-15”’, ‘Alan Bean almaMater “UT Austin, B.S. 1955”’, ‘Alan Bean birthPlace Wheeler, Texas’, ‘Alan Bean selection 1963’) Alan Bean was an American astronaut, born on March 15, 1932 in Wheeler, Texas. He received a Bachelor of Science degree at the University of Texas at Austin in 1955 and was chosen by NASA in 1963. 185 As illustrated by the contrast between Examples (6) and (7) above, text length (number of tokens per text) and the number of sentences per text are strong indicators of the complexity of the generation task. We use the Stanford Part-Of-Speech Tagger and Parser version 3.5.2 (dated 2015-0420, Manning et al. 2014) to tokenize and to perform sentence segmentation on text. As shown in Table 4, WEBNLG’s texts are longer both in terms of tokens and in terms of number of sentences per text. Another difference between the two datasets is that WEBNLG contains a higher number of text per input thereby providing a better basis for learning paraphrases. WEBNLG RNNLG Nb. Text / Input 2.63 1.38 Text Length 24.36/23/4/80 18.37/19/1/76 (avg/median/min/max) Nb. Sentence / Text 1.45/1/1/6 1.25/1/1/6 (avg/median/min/max) Nb. Tokens 290479 531871 Nb. Types 2992 3524 Lexical Sophistication 0.69 0.54 CTTR 3.93 3.42 Table 4: Text statistics from WEBNLG and RNNLG. The size and the content of the vocabulary is another important factor in ensuring the learning of wide coverage generators. While a large vocabulary makes the learning problem harder, it also allows for larger coverage. WEBNLG exhibits a higher corrected type-token ratio (CTTR), which indicates greater lexical variety, and higher lexical sophistication (LS). Lexical sophistication measures the proportion of relatively unusual or advanced word types in the text. In practice, LS is the proportion of lexical word types (lemma) which are not in the list of 2,000 most frequent words generated from the British National Corpus11. Type-token ratio (TTR) is a measure of diversity defined as the ratio of the number of word types to the number of words in a text. To address the fact that this ratio tends to decrease with the size of the corpus, corrected TTR can be used to control for corpus size. It is defined as T/ √ 2N, where T is the number of types and N the number of tokens. Overall, the results shown in Table 4 indicate that WEBNLG texts are both lexically more diverse (higher corrected type/token ratio) and more 11We compute LS and CTTR using the Lexical Complexity Analyzer developed by Lu (2012). sophisticated (higher proportion of unfrequent words) than RNNLG’s. They also show a proportionately larger vocabulary for WEBNLG (2,992 types for 290,479 tokens in WEBNLG against 3,524 types for 531,871 tokens in RNNLG). 4.3 Neural Generation Richer and more varied datasets are harder to learn from. 
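Before turning to the learning experiments, the lexical-diversity measures used in the comparison above (type-token ratio, corrected TTR and lexical sophistication) can be sketched in a few lines. The token list and the frequent-word list below are placeholders: the paper computes LS and CTTR with the Lexical Complexity Analyzer over the 2,000 most frequent BNC words.

```python
# Back-of-the-envelope versions of the lexical-diversity measures in Table 4.
# The inputs are toy placeholders; see the Lexical Complexity Analyzer for
# the measures as actually computed in the paper.
import math

def corrected_ttr(tokens):
    """CTTR = T / sqrt(2N), with T word types and N tokens."""
    return len(set(tokens)) / math.sqrt(2 * len(tokens))

def lexical_sophistication(lemmas, frequent_lemmas):
    """Proportion of lemma types not among the most frequent words."""
    types = set(lemmas)
    return len(types - frequent_lemmas) / len(types)

toy_tokens = "alan bean was an american astronaut born in wheeler texas".split()
frequent = {"was", "an", "in", "born", "american"}  # stand-in for the BNC 2,000 list
print(round(corrected_ttr(toy_tokens), 3))
print(round(lexical_sophistication(toy_tokens, frequent), 3))
```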
As a proof-of-concept study of the comparative difficulty of the two datasets with respect to machine learning, we compare the performance of a sequence-to-sequence model for generation on both datasets. We use the multi-layered sequence-to-sequence model with attention mechanism described in (Vinyals et al., 2015).12 The model was trained with 3-layer LSTMs with 512 units each with a batch size of 64 and a learning rate of 0.5. To allow for a fair comparison, we use a similar amount of data (13K data-text pairs) for both datasets. As RNNLG is bigger in size than WEBNLG, we constructed a balanced sample of RNNLG which included equal number of instances per category (tv, laptop, etc). We use a 3:1:1 ratio for training, developement and testing. The training was done in two delexicalisation modes: fully and name only. In case of fully delexicalisation, all entities were replaced by their generic terms, whereas in name only mode only subjects were modified in that way. For instance, the triple (FC K¨oln manager Peter St¨oger) was delexicalised as (SportsTeam manager Manager) in the first mode, and as (SportsTeam manager Peter St¨oger) in the second mode. The delexicalisation in sentences was done using the exact match between entities and tokens. For training, we use all the available vocabulary. Input and output vocabulary sizes are reported in Table 5. Table 5 shows the perplexity results. In both modes, RNNLG yielded lower scores than WEBNLG. This is inline with the observations made above concerning the higher data diversity, larger vocabulary and more complex texts of 12We used the TensorFlow code available at https://github.com/tensorflow/models/ tree/master/tutorials/rnn/translate. Alternatively, we could have used the implementation of Wen et al. (2016) which is optimised for generation. However the code is geared toward dialog acts and modifying it to handle RDF triples is non trivial. Since the comparison aims at examining the relative performance of the same neural network on the two datasets, we used the tensor flow implementation instead. 186 WEBNLG. Similary, the BLEU score of the generated sentences (Papineni et al., 2002) is lower for WEBNLG suggesting again a dataset that is more complex and therefore more difficult to learn from. Delexicalisation Mode WEBNLG RNNLG Vocab size Fully 520, 2430 140, 1530 Name only 1130, 2940 570, 1680 Perplexity Fully 27.41 17.42 Name only 25.39 23.93 BLEU Fully 0.19 0.26 Name only 0.10 0.27 Table 5: Vocabulary sizes of input, output (number of tokens). Perplexity and BLEU scores. 5 Conclusion We presented a framework for building NLG datato-text training corpora from existing knowledge bases. One feature of our framework is that datasets created using this framework can be used for training and testing KB verbalisers an in particular, verbalisers for RDF knowledge bases. Following the development of the semantic web, many large scale datasets are encoded in the RDF language (e.g., MusicBrainz, FOAF, LinkedGeoData) and official institutions13 increasingly publish their data in this format. In this context, our framework is useful both for creating training data from RDF KB verbalisers and to increase the number of datasets available for training and testing NLG. Another important feature of our framework is that it permits creating semantically and linguistically diverse datasets which should support the learning of lexically and syntactically, wide coverage micro-planners. 
We applied our framework to DBpedia data and showed that although twice smaller than the largest corpora currently available for training data-to-text microplanners, the resulting dataset is more semantically and linguistically diverse. Despite the disparity in size, the number of attributes is comparable in the two datasets. The ratio between input and input patterns is five times lower in our dataset thereby making learning harder but also diminishing the risk of overfitting and providing for wider linguistic coverage. Conversely, the ratio of text per input is twice higher thereby providing better support for learning paraphrases. 13See http://museum-api.pbworks.com for examples. We have recently released a first version of the WebNLG dataset in the context of a shared task on micro-planning14. This new dataset consists of 21,855 data/text pairs with a total of 8,372 distinct data input. The input describes entities belonging to 9 distinct DBpedia categories namely, Astronaut, University, Monument, Building, ComicsCharacter, Food, Airport, SportsTeam and WrittenWork. The WebNLG data is licensed under the following license: CC Attribution-NoncommercialShare Alike 4.0 International and can be downloaded at http://talc1.loria.fr/ webnlg/stories/challenge.html. Recently, several sequence-to-sequence models have been proposed for generation. Our experiments suggest that these are not optimal when it comes to generate linguistically complex texts from rich data. More generally, they indicate that the data-to-text corpora built by our framework are challenging for such models. We hope that the WEBNLG dataset which we have made available for the WEBNLG shared task will drive the deep learning community to take up this new challenge and work on the development of neural generators that can handle the generation of KB verbalisers and of linguistically rich texts. Acknowledgments The research presented in this paper was partially supported by the French National Research Agency within the framework of the WebNLG Project (ANR-14-CE24-0033). The third author is supported by the H2020 project SUMMA (under grant agreement 688139). References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2012. Abstract meaning representation (AMR) 1.0 specification. In Proceedings of EMNLP. Eva Banik, Claire Gardent, and Eric Kow. 2013. The KBGen challenge. In Proceedings of ENLG. Anja Belz, Michael White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The 14The test data for the WEBNLG challenge will be released on August 18th, 2017 and preliminary results will be presented and discussed at INLG 2017, https:// eventos.citius.usc.es/inlg2017/index. 187 first surface realisation shared task: Overview and evaluation results. In Proceedings of ENLG. David L Chen and Raymond J Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In Proceedings of ICML. Gerasimos Lampouras and Andreas Vlachos. 2016. Imitation learning for language generation from unaligned data. In Proceedings of COLING. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In Proceedings of EMNLP. Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning Semantic Correspondences with Less Supervision. In Proceedings of ACL-IJCNLP. Xiaofei Lu. 2012. 
The relationship of lexical richness to the quality of ESL learners’ oral narratives. The Modern Language Journal 96(2):190–208. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of ACL:System Demonstrations. Pablo N Mendes, Max Jakob, and Christian Bizer. 2012. DBpedia: A Multilingual Cross-domain Knowledge Base. In Proceedings of LREC. Jekaterina Novikova and Verena Rieser. 2016. The aNALoGuE challenge: Non aligned language generation. In Proceedings of INLG. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL. Laura Perez-Beltrachini, Rania Mohamed Sayed, and Claire Gardent. 2016. Building RDF content for Data-to-Text generation. In Proceedings of COLING. Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proceedings of NAACL. Andreas Stolcke. 2002. SRILM – An extensible language modeling toolkit. In Proceedings of ICSLP. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Proceedings of NIPS. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of NAACL-HLT. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP. 188
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1857–1869 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1170 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1857–1869 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1170 Towards a Seamless Integration of Word Senses into Downstream NLP Applications Mohammad Taher Pilehvar2, Jose Camacho-Collados1, Roberto Navigli1 and Nigel Collier2 1Department of Computer Science, Sapienza University of Rome 2Department of Theoretical and Applied Linguistics, University of Cambridge 1{collados,navigli}@di.uniroma1.it 2{mp792,nhc30}@cam.ac.uk Abstract Lexical ambiguity can impede NLP systems from accurate understanding of semantics. Despite its potential benefits, the integration of sense-level information into NLP systems has remained understudied. By incorporating a novel disambiguation algorithm into a state-of-the-art classification model, we create a pipeline to integrate sense-level information into downstream NLP applications. We show that a simple disambiguation of the input text can lead to consistent performance improvement on multiple topic categorization and polarity detection datasets, particularly when the fine granularity of the underlying sense inventory is reduced and the document is sufficiently large. Our results also point to the need for sense representation research to focus more on in vivo evaluations which target the performance in downstream NLP applications rather than artificial benchmarks. 1 Introduction As a general trend, most current Natural Language Processing (NLP) systems function at the word level, i.e. individual words constitute the most fine-grained meaning-bearing elements of their input. The word level functionality can affect the performance of these systems in two ways: (1) it can hamper their efficiency in handling words that are not encountered frequently during training, such as multiwords, inflections and derivations, and (2) it can restrict their semantic understanding to the level of words, with all their ambiguities, and thereby prevent accurate capture of the intended meanings. The first issue has recently been alleviated by techniques that aim to boost the generalisation power of NLP systems by resorting to sub-word or character-level information (Ballesteros et al., 2015; Kim et al., 2016). The second limitation, however, has not yet been studied sufficiently. A reasonable way to handle word ambiguity, and hence to tackle the second issue, is to semantify the input text: transform it from its surface-level semantics to the deeper level of word senses, i.e. their intended meanings. We take a step in this direction by designing a pipeline that enables seamless integration of word senses into downstream NLP applications, while benefiting from knowledge extracted from semantic networks. To this end, we propose a quick graph-based Word Sense Disambiguation (WSD) algorithm which allows high confidence disambiguation of words without much computation overload on the system. We evaluate the pipeline in two downstream NLP applications: polarity detection and topic categorization. 
Specifically, we use a classification model based on Convolutional Neural Networks which has been shown to be very effective in various text classification tasks (Kalchbrenner et al., 2014; Kim, 2014; Johnson and Zhang, 2015; Tang et al., 2015; Xiao and Cho, 2016). We show that a simple disambiguation of input can lead to performance improvement of a state-of-the-art text classification system on multiple datasets, particularly for long inputs and when the granularity of the sense inventory is reduced. Our pipeline is quite flexible and modular, as it permits the integration of different WSD and sense representation techniques. 2 Motivation With the help of an example news article from the BBC, shown in Figure 1, we highlight some of the potential deficiencies of word-based models. 1857 Figure 1: Excerpt of a news article from the BBC. Ambiguity. Language is inherently ambiguous. For instance, Mercedes, race, Hamilton and Formula can refer to several different entities or meanings. Current neural models have managed to successfully represent complex semantic associations by effectively analyzing large amounts of data. However, the word-level functionality of these systems is still a barrier to the depth of their natural language understanding. Our proposal is particularly tailored towards addressing this issue. Multiword expressions (MWE). MWE are lexical units made up of two or more words which are idiosyncratic in nature (Sag et al., 2002), e.g, Lewis Hamilton, Nico Rosberg and Formula 1. Most existing word-based models ignore the interdependency between MWE’s subunits and treat them as individual units. Handling MWE has been a long-standing problem in NLP and has recently received a considerable amount of interest (Tsvetkov and Wintner, 2014; Salehi et al., 2015). Our pipeline facilitates this goal. Co-reference. Co-reference resolution of concepts and entities is not explicitly tackled by our approach. However, thanks to the fact that words that refer to the same meaning in context, e.g., Formula 1-F1 or German Grand Prix-German GPHockenheim, are all disambiguated to the same concept, the co-reference issue is also partly addressed by our pipeline. 3 Disambiguation Algorithm Our proposal relies on a seamless integration of word senses in word-based systems. The goal is to semantify the text prior to its being fed into the system by transforming its individual units from word surface form to the deeper level of word senses. The semantification step is mainly tailored Algorithm 1 Disambiguation algorithm Input: Input text T and semantic network N Output: Set of disambiguated senses ˆS 1: Graph representation of T: (S, E) ←getGraph(T, N) 2: ˆS ←∅ 3: for each iteration i ∈{1, ..., len(T)} 4: ˆs = argmaxs∈S |{(s, s′) ∈E : s′ ∈S}| 5: maxDeg = |{(ˆs, s′) ∈E : s′ ∈S}| 6: if maxDeg < θ|S| / 100 then 7: break 8: else 9: ˆS ←ˆS ∪{ˆs} 10: E ←E \ {(s, s′) : s ∨s′ ∈getLex(ˆs)} 11: return Disambiguation output ˆS towards resolving ambiguities, but it brings about other advantages mentioned in the previous section. The aim is to provide the system with an input of reduced ambiguity which can facilitate its decision making. To this end, we developed a simple graph-based joint disambiguation and entity linking algorithm which can take any arbitrary semantic network as input. The gist of our disambiguation technique lies in its speed and scalability. 
Conventional knowledge-based disambiguation systems (Hoffart et al., 2012; Agirre et al., 2014; Moro et al., 2014; Ling et al., 2015; Pilehvar and Navigli, 2014) often rely on computationally expensive graph algorithms, which limits their application to on-the-fly processing of large number of text documents, as is the case in our experiments. Moreover, unlike supervised WSD and entity linking techniques (Zhong and Ng, 2010; Cheng and Roth, 2013; Melamud et al., 2016; Limsopatham and Collier, 2016), our algorithm relies only on semantic networks and does not require any senseannotated data, which is limited to English and almost non-existent for other languages. Algorithm 1 shows our procedure for disambiguating an input document T. First, we retrieve from our semantic network the list of candidate senses1 for each content word, as well as semantic relationships among them. As a result, we obtain a graph representation (S, E) of the input text, where S is the set of candidate senses and E is the set of edges among different senses in S. The graph is, in fact, a small sub-graph of the input semantic network, N. Our algorithm then selects the best candidates iteratively. In each iteration, the 1As defined in the underlying sense inventory, up to trigrams. We used Stanford CoreNLP (Manning et al., 2014) for tokenization, Part-of-Speech (PoS) tagging and lemmatization. 1858 Figure 2: Simplified graph-based representation of a sample sentence. candidate sense that has the highest graph degree maxDeg is chosen as the winning sense: maxDeg = max s∈S |{(s, s′) ∈E : s′ ∈S}| (1) After each iteration, when a candidate sense ˆs is selected, all the possible candidate senses of the corresponding word (i.e. getLex(ˆs)) are removed from E (line 10 in the algorithm). Figure 2 shows a simplified version of the graph for a sample sentence. The algorithm would disambiguate the content words in this sentence as follows. It first associates Oasis with its rock band sense, since its corresponding node has the highest degree, i.e. 3. On the basis of this, the desert sense of Oasis and its link to the stone sense of rock are removed from the graph. In the second iteration, rock band is disambiguated as music band given that its degree is 2.2 Finally, Manchester is associated with its city sense (with a degree of 1). In order to enable disambiguating at different confidence levels, we introduce a threshold θ which determines the stopping criterion of the algorithm. Iteration continues until the following condition is fulfilled: maxDeg < θ|S| / 100. This ensures that the system will only disambiguate those words for which it has a high confidence and backs off to the word form otherwise, avoiding the introduction of unwanted noise in the data for uncertain cases or for word senses that are not defined in the inventory. 2For bigrams and trigrams whose individual words might also be disambiguated (such as rock and band in rock band), the longest unit has the highest priority (i.e. rock band). Figure 3: Text classification model architecture. 4 Classification Model In our experiments, we use a standard neural network based classification approach which is similar to the Convolution Neural Network classifier of Kim (2014) and the pioneering model of Collobert et al. (2011). Figure 3 depicts the architecture of the model. 
The network receives the concatenated vector representations of the input words, v1:n = v1⊕v2⊕· · ·⊕vn, and applies (convolves) filters F on windows of h words, mi = f(F.vi:i+h−1 + b), where b is a bias term and f() is a non-linear function, for which we use ReLU (Nair and Hinton, 2010). The convolution transforms the input text to a feature map m = [m1, m2, . . . , mn−h+1]. A max pooling operation then selects the most salient feature ˆm = max{m} for each filter. In the network of Kim (2014), the pooled features are directly passed to a fully connected softmax layer whose outputs are class probabilities. However, we add a recurrent layer before softmax in order to enable better capturing of longdistance dependencies. It has been shown by Xiao and Cho (2016) that a recurrent layer can replace multiple layers of convolution and be beneficial, particularly when the length of input text grows. Specifically, we use a Long Short-Term Memory (Hochreiter and Schmidhuber, 1997, LSTM) as our recurrent layer which was originally proposed to avoid the vanishing gradient problem and has proven its abilities in capturing distant dependencies. The LSTM unit computes three gate vectors 1859 (forget, input, and output) as follows: ft = σ(Wf gt + Uf ht−1 + bf), it = σ(Wi gt + Ui ht−1 + bi), ot = σ(Wo gt + Uo ht−1 + bo), (2) where W, U, and b are model parameters and g and h are input and output sequences, respectively. The cell state vector ct is then computed as ct = ft ct−1 + it tanh(˜ct) where ˜ct = Wc gt + Uc ht−1. Finally, the output sequence is computed as ht = ot tanh(ct). As for regularization, we used dropout (Hinton et al., 2012) after the embedding layer. We perform experiments with two configurations of the embedding layer: (1) Random, initialized randomly and updated during training, and (2) Pre-trained, initialized by pre-trained representations and updated during training. In the following section we describe the pre-trained word and sense representation used for the initialization of the second configuration. 4.1 Pre-trained Word and Sense Embeddings One of the main advantages of neural models is that they usually represent the input words as dense vectors. This can significantly boost a system’s generalisation power and results in improved performance (Zou et al., 2013; Bordes et al., 2014; Kim, 2014; Weiss et al., 2015, interalia). This feature also enables us to directly plug in pre-trained sense representations and check them in a downstream application. In our experiments we generate a set of sense embeddings by extending DeConf, a recent technique with state-of-the-art performance on multiple semantic similarity benchmarks (Pilehvar and Collier, 2016). We leave the evaluation of other representations to future work. DeConf gets a pre-trained set of word embeddings and computes sense embeddings in the same semantic space. To this end, the approach exploits the semantic network of WordNet (Miller, 1995), using the Personalized PageRank (Haveliwala, 2002) algorithm, and obtains a set of sense biasing words Bs for a word sense s. The sense representation of s is then obtained using the following formula: ˆv(s) = 1 |Bs| |Bs| X i=1 e −i δ v(wi), (3) where δ is a decay parameter and v(wi) is the embedding of wi, i.e. the ith word in the sense biasing list of s, i.e. Bs. We follow Pilehvar and Collier (2016) and set δ = 5. Finally, the vector for sense s is calculated as the average of ˆv(s) and the embedding of its corresponding word. 
Owing to its reliance on WordNet's semantic network, DeConf is limited to generating only those word senses that are covered by this lexical resource. We propose to use Wikipedia in order to expand the vocabulary of the computed word senses. Wikipedia provides high coverage of named entities and domain-specific terms in many languages, while at the same time benefiting from continuous updates by collaborators. Moreover, it can easily be viewed as a sense inventory where individual articles are word senses arranged through hyperlinks and redirections.

Camacho-Collados et al. (2016b) proposed NASARI,³ a technique to compute the most salient words for each Wikipedia page. These salient words were computed by exploiting the structure and content of Wikipedia and proved effective in tasks such as Word Sense Disambiguation (Tripodi and Pelillo, 2017; Camacho-Collados et al., 2016a), knowledge-base construction (Lieto et al., 2016), domain-adapted hypernym discovery (Espinosa-Anke et al., 2016; Camacho-Collados and Navigli, 2017) or object recognition (Young et al., 2016). We view these lists as biasing words for individual Wikipedia pages, and then leverage the exponential decay function (Equation 3) to compute new sense embeddings in the same semantic space. In order to represent both WordNet and Wikipedia sense representations in the same space, we rely on the WordNet–Wikipedia mapping provided by BabelNet⁴ (Navigli and Ponzetto, 2012). For the WordNet synsets which are mapped to Wikipedia pages in BabelNet, we average the corresponding Wikipedia-based and WordNet-based sense embeddings.

Footnote 3: We downloaded the salient words for Wikipedia pages (NASARI English lexical vectors, version 3.0) from http://lcl.uniroma1.it/nasari/
Footnote 4: We used the Java API from http://babelnet.org

4.2 Pre-trained Supersense Embeddings

It has been argued that WordNet sense distinctions are too fine-grained for many NLP applications (Hovy et al., 2013). The issue can be tackled by grouping together similar senses of the same word, either using automatic clustering techniques (Navigli, 2006; Agirre and Lopez, 2003; Snow et al., 2007) or with the help of WordNet's lexicographer files.⁵ Various applications have been shown to improve upon moving from senses to supersenses (Rüd et al., 2011; Severyn et al., 2013; Flekova and Gurevych, 2016). In WordNet's lexicographer files there are a total of 44 sense clusters, referred to as supersenses, for categories such as event, animal, and quantity. In our experiments we use these supersenses in order to reduce the granularity of our WordNet and Wikipedia senses. To generate supersense embeddings, we simply average the embeddings of the senses in the corresponding cluster.

Footnote 5: https://wordnet.princeton.edu/man/lexnames.5WN.html

5 Evaluation

We evaluated our model on two classification tasks: topic categorization (Section 5.2) and polarity detection (Section 5.3). In the following section we present the common experimental setup.

5.1 Experimental setup

Classification model. Throughout all the experiments we used the classification model described in Section 4. The general architecture of the model was the same for both tasks, with slight variations in hyperparameters given the different natures of the tasks, following the values suggested by Kim (2014) and Xiao and Cho (2016) for the two tasks. Hyperparameters were fixed across all configurations in the corresponding tasks. The embedding layer was fixed to 300 dimensions, irrespective of the configuration, i.e. Random and Pre-trained.
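As a rough illustration of the classifier just described, here is a minimal sketch using the Keras Sequential API (the paper reports using Keras with Theano). The layer sizes and the exact arrangement of the pooling and recurrent layers are placeholders based on one plausible reading of Section 4, not the authors' released configuration.

```python
from keras.models import Sequential
from keras.layers import Embedding, Dropout, Conv1D, MaxPooling1D, LSTM, Dense

def build_classifier(vocab_size, num_classes, emb_dim=300, n_filters=100,
                     kernel_size=5, pool_size=4, lstm_units=100, dropout=0.5):
    """CNN + LSTM text classifier in the spirit of Section 4 (sketch only).

    In the "Pre-trained" configuration the embedding weights would be
    initialized from the word/sense/supersense vectors of Sections 4.1-4.2
    and kept trainable; here the layer is randomly initialized ("Random").
    """
    model = Sequential([
        Embedding(vocab_size, emb_dim),                      # word/sense/supersense inputs
        Dropout(dropout),                                    # dropout after the embedding layer
        Conv1D(n_filters, kernel_size, activation="relu"),   # convolution + ReLU -> feature map
        MaxPooling1D(pool_size),                             # pooling over the feature map
        LSTM(lstm_units),                                    # recurrent layer before the softmax
        Dense(num_classes, activation="softmax"),            # class probabilities
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```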
For both tasks the evaluation was carried out by 10-fold cross-validation unless standard training–testing splits were available. The disambiguation threshold θ (cf. Section 3) was tuned on the training portion of the corresponding data, over seven values in [0, 3] in steps of 0.5.⁶ We used Keras (Chollet, 2015) and Theano (Team, 2016) for our model implementations.

Footnote 6: We observed that values higher than 3 led to very few disambiguations. While the best results were generally achieved in the [1.5, 2.5] range, performance differences across threshold values were not statistically significant in most cases.

Semantic network. The integration of senses was carried out as described in Section 3. For disambiguating with both WordNet and Wikipedia senses we relied on the joint semantic network of Wikipedia hyperlinks and WordNet via the mapping provided by BabelNet.⁷

Footnote 7: For simplicity we refer to this joint sense inventory as Wikipedia, but note that WordNet senses are also covered.

Pre-trained word and sense embeddings. Throughout all the experiments we used Word2vec (Mikolov et al., 2013) embeddings, trained on the Google News corpus.⁸ We truncated this set to its 250K most frequent words. We also used WordNet 3.0 (Fellbaum, 1998) and the Wikipedia dump of November 2014 to compute the sense embeddings (see Section 4.1). As a result, we obtained a set of 757,262 sense embeddings in the same space as the pre-trained Word2vec word embeddings. We used DeConf (Pilehvar and Collier, 2016) as our pre-trained WordNet sense embeddings. All vectors had a fixed dimensionality of 300.

Footnote 8: https://code.google.com/archive/p/word2vec/

Supersenses. In addition to WordNet senses, we experimented with supersenses (see Section 4.2) to check how reducing granularity would affect system performance. For obtaining supersenses in a given text we relied on our disambiguation pipeline and simply clustered together senses belonging to the same WordNet supersense.

Evaluation measures. We report the results in terms of standard accuracy and F1 measures.⁹

Footnote 9: Since all models in our experiments provide full coverage, accuracy and F1 denote micro- and macro-averaged F1, respectively (Yang, 1999).

5.2 Topic Categorization

The task of topic categorization consists of assigning a label (i.e. topic) to a given document from a pre-defined set of labels.

5.2.1 Datasets

For this task we used two newswire and one medical topic categorization datasets. Table 1 summarizes the statistics of each dataset.¹⁰

Table 1: Statistics of the topic categorization datasets.

Dataset      Domain    No. of classes   No. of docs   Avg. doc. size   Size of vocab.   Coverage   Evaluation
BBC          News      5                2,225         439.5            35,628           87.4%      10 cross valid.
Newsgroups   News      6                18,846        394.0            225,046          83.4%      Train-Test
Ohsumed      Medical   23               23,166        201.2            65,323           79.3%      Train-Test

Footnote 10: The coverage of the datasets was computed using the 250K top words in the Google News Word2vec embeddings.

The BBC news dataset¹¹ (Greene and Cunningham, 2006) comprises news articles taken from BBC, divided into five topics: business, entertainment, politics, sport and tech. Newsgroups (Lang, 1995) is a collection of 11,314 documents for training and 7,532 for testing,¹² divided into six topics: computing, sport and motor vehicles, science, politics, religion and sales.¹³

Footnote 11: http://mlg.ucd.ie/datasets/bbc.html
Footnote 12: We used the train-test partition available at http://qwone.com/~jason/20Newsgroups/
Footnote 13: The dataset has 20 fine-grained categories clustered into six general topics. We used the coarse-grained labels for their clearer distinction and consistency with BBC topics.
Finally, Ohsumed¹⁴ is a collection of medical abstracts from MEDLINE, an online medical information database, categorized according to 23 cardiovascular diseases. For our experiments we used the partition split of 10,433 documents for training and 12,733 for testing.¹⁵

Footnote 14: ftp://medir.ohsu.edu/pub/ohsumed
Footnote 15: http://disi.unitn.it/moschitti/corpora.htm

5.2.2 Results

Table 2 shows the results of our classification model and its variants on the three datasets.¹⁶

Table 2: Classification performance at the word, sense, and supersense levels with random and pre-trained embedding initialization. We show in bold those settings that improve the word-based model.

Initialization   Input type                BBC News       Newsgroups     Ohsumed
                                           Acc    F1      Acc    F1      Acc    F1
Random           Word                      93.0   92.8    87.7   85.6    30.1   20.7
                 Sense (WordNet)           93.5   93.3    88.1   86.9    27.2†  18.3
                 Sense (Wikipedia)         92.7   92.5    86.7   84.9    29.7   20.9
                 Supersense (WordNet)      93.6   93.4    90.1∗  89.0    31.8∗  22.0
                 Supersense (Wikipedia)    94.6∗  94.4    88.5   85.8    31.1   21.3
Pre-trained      Word                      97.6   97.5    91.1   90.6    29.4   20.1
                 Sense (WordNet)           97.3   97.1    90.2   88.6    30.2   20.4
                 Sense (Wikipedia)         96.3   96.2    89.6†  88.9    32.4   22.3
                 Supersense (WordNet)      96.8   96.7    89.6   88.9    29.5   19.9
                 Supersense (Wikipedia)    96.9   96.9    88.6   87.4    30.6∗  20.3

Footnote 16: Symbols ∗ and † indicate the sense-based model with the smallest margin to the word-based model whose accuracy is statistically significant at the 0.95 confidence level according to an unpaired t-test (∗ for positive and † for negative change).

When the embedding layer is initialized randomly, the model integrated with word senses consistently improves over the word-based model, particularly when the fine-granularity of the underlying sense inventory is reduced using supersenses (with statistically significant gains on the three datasets). This highlights the fact that a simple disambiguation of the input can bring about performance gain for a state-of-the-art classification system. Also, the better performance of supersenses suggests that the sense distinctions of WordNet are too fine-grained for the topic categorization task.

However, when pre-trained representations are used to initialize the embedding layer, no improvement is observed over the word-based model. This can be attributed to the quality of the representations, as the model utilizing them was unable to benefit from the advantage offered by sense distinctions. Our results suggest that research in sense representation should put special emphasis on real-world evaluations on benchmarks for downstream applications, rather than on artificial tasks such as word similarity. In fact, research has previously shown that word similarity might not constitute a reliable proxy to measure the performance of word embeddings in downstream applications (Tsvetkov et al., 2015; Chiu et al., 2016).

Among the three datasets, Ohsumed proves to be the most challenging one, mainly for its larger number of classes (i.e. 23) and its domain-specific nature (i.e. medicine). Interestingly, unlike for the other two datasets, the introduction of pre-trained word embeddings to the system results in reduced performance on Ohsumed. This suggests that general domain embeddings might not be beneficial in specialized domains, which corroborates previous findings by Yadav et al. (2017) on a different task, i.e. entity extraction.
This performance drop may also be due to diachronic issues (Ohsumed dates back to the 1980s) and low coverage: the pre-trained Word2vec embeddings cover 79.3% of the words in Ohsumed (see Table 1), in contrast to the higher coverage on the newswire datasets, i.e. Newsgroups (83.4%) and BBC (87.4%). However, also note that the best overall performance is attained when our pre-trained Wikipedia sense embeddings are used. This highlights the effectiveness of Wikipedia in handling domain-specific entities, thanks to its broad sense inventory.

5.3 Polarity Detection

Polarity detection is the most popular evaluation framework for sentiment analysis (Dong et al., 2015). The task is essentially a binary classification which determines whether the sentiment of a given sentence or document is negative or positive.

5.3.1 Datasets

For the polarity detection task we used five standard evaluation datasets; Table 3 summarizes their statistics. PL04 (Pang and Lee, 2004) is a polarity detection dataset composed of full movie reviews. PL05¹⁸ (Pang and Lee, 2005), instead, is composed of short snippets from movie reviews. RTC contains critic reviews from Rotten Tomatoes,¹⁹ divided into 436,000 training and 2,000 test instances. IMDB (Maas et al., 2011) includes 50,000 movie reviews, split evenly between training and test. Finally, we used the Stanford Sentiment dataset (Socher et al., 2013), which associates each review with a value that denotes its sentiment. To be consistent with the binary classification of the other datasets, we removed the neutral phrases according to the dataset's scale (between 0.4 and 0.6) and considered the reviews whose values were below 0.4 as negative and above 0.6 as positive. This resulted in a binary polarity dataset of 119,783 phrases. Unlike the previous four datasets, this dataset does not contain an even distribution of positive and negative labels.

Footnote 18: Both PL04 and PL05 were downloaded from http://www.cs.cornell.edu/people/pabo/movie-review-data/
Footnote 19: http://www.rottentomatoes.com

Table 3: Statistics of the polarity detection datasets.

Dataset    Type       No. of docs   Avg. doc. size   Vocabulary size   Coverage   Evaluation
RTC        Snippets   438,000       23.4             128,056           81.3%      Train-Test
IMDB       Reviews    50,000        268.8            140,172           82.5%      Train-Test
PL05       Snippets   10,662        21.5             19,825            81.3%      10 cross valid.
PL04       Reviews    2,000         762.1            45,077            82.4%      10 cross valid.
Stanford   Phrases    119,783       10.0             19,400            81.6%      10 cross valid.

5.3.2 Results

Table 4 lists the accuracy performance of our classification model and all its variants on the five polarity detection datasets.

Table 4: Accuracy performance on five polarity detection datasets. Given that the polarity datasets are balanced,¹⁷ we do not report F1, which would have been identical to accuracy.

Initialization   Input type                RTC    IMDB   PL05   PL04   Stanford
Random           Word                      83.6   87.7   77.3   67.9   91.8
                 Sense (WordNet)           83.2   87.4   76.6   67.4   91.3
                 Sense (Wikipedia)         83.1   88.0   75.9†  67.1   91.0
                 Supersense (WordNet)      84.4   88.0   75.9   66.2   91.4†
                 Supersense (Wikipedia)    83.1   88.4∗  75.8   69.3∗  91.0
Pre-trained      Word                      85.5   88.3   80.2   72.5   93.1
                 Sense (WordNet)           83.4   88.3   79.2   69.7†  92.6
                 Sense (Wikipedia)         83.8   87.0†  79.2   73.1   92.3
                 Supersense (WordNet)      85.2   88.8   79.5   73.8   92.7†
                 Supersense (Wikipedia)    84.2   87.9   78.3†  72.6   92.2

Footnote 17: Stanford is the only unbalanced dataset, but F1 results were almost identical to accuracy.

Results are generally better than those of Kim (2014), showing that the addition of the recurrent layer to the model (cf. Section 4) was beneficial. However, interestingly, no consistent performance gain is observed in the polarity detection task when the model is provided with disambiguated input, particularly for datasets with relatively short reviews. We attribute this to the nature of the task. Firstly, given that words rarely happen to be ambiguous with respect to their sentiment, the semantic sense distinctions provided by the disambiguation stage do not assist the classifier in better decision making, and instead introduce data sparsity. Secondly, since the datasets mostly contain short texts, e.g., sentences or snippets, the disambiguation algorithm does not have sufficient context to make high-confidence judgements, resulting in fewer disambiguations or less reliable ones. In the following section we perform a more in-depth analysis of the impact of document size on the performance of our sense-based models.

5.4 Analysis
Document size. A detailed analysis revealed a relation between document size (the number of tokens) and the performance gain of our sense-level model. We show in Figure 4 how these two vary for our most consistent configuration, i.e. Wikipedia supersenses, with random initialization.

Figure 4: Relation between average document size and performance improvement using Wikipedia supersenses with random initialization. (The plot shows accuracy gain against average document size for the Stanford, PL05, RTC, Ohsumed, IMDB, Newsgroups, BBC and PL04 datasets.)

Interestingly, as a general trend, the performance gain increases with average document size, irrespective of the classification task. We attribute this to two main factors:

1. Sparsity: Splitting a word into multiple word senses can have the negative side effect that the corresponding training data for that word is distributed among multiple independent senses. This reduces the training instances per word sense, which might affect the classifier's performance, particularly when senses are semantically related (in comparison to fine-grained senses, supersenses address this issue to some extent).

2. Disambiguation quality: As also mentioned previously, our disambiguation algorithm requires the input text to be sufficiently large so as to create a graph with an adequate number of coherent connections to function effectively. In fact, for topic categorization, in which the documents are relatively long, our algorithm manages to disambiguate a larger proportion of words in documents with high confidence. The lower performance of graph-based disambiguation algorithms on short texts is a known issue (Moro et al., 2014; Raganato et al., 2017), the tackling of which remains an area of exploration.

Senses granularity. Our results showed that reducing the fine-granularity of sense distinctions can be beneficial to both tasks, irrespective of the underlying sense inventory, i.e. WordNet or Wikipedia, which corroborates previous findings (Hovy et al., 2013; Flekova and Gurevych, 2016). This suggests that text classification does not require fine-grained semantic distinctions. In this work we used a simple technique based on WordNet's lexicographer files for coarsening senses in this sense inventory as well as in Wikipedia. We leave the exploration of this promising area as well as the evaluation of other granularity reduction techniques for WordNet (Snow et al., 2007; Bhagwani et al., 2013) and Wikipedia (Dandala et al., 2013) sense inventories to future work.
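Since the coarse-grained supersense configurations come up repeatedly in this analysis, here is a minimal sketch of how the supersense vectors of Section 4.2 can be formed by averaging their member sense embeddings. The mapping from a sense to its supersense (e.g. its WordNet lexicographer file) is assumed to be given; the data structures are hypothetical.

```python
from collections import defaultdict
import numpy as np

def supersense_embeddings(sense_vectors, sense_to_supersense):
    """Average the vectors of all senses mapped to the same supersense.

    sense_vectors       : dict mapping a sense id to its embedding (np.ndarray),
                          covering WordNet and/or Wikipedia senses.
    sense_to_supersense : dict mapping a sense id to its supersense label
                          (hypothetical mapping, e.g. the lexicographer file).
    """
    groups = defaultdict(list)
    for sense, vec in sense_vectors.items():
        label = sense_to_supersense.get(sense)
        if label is not None:
            groups[label].append(vec)
    # one vector per supersense: the mean of its member sense vectors
    return {label: np.mean(vecs, axis=0) for label, vecs in groups.items()}
```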
6 Related Work The past few years have witnessed a growing research interest in semantic representation, mainly as a consequence of the word embedding tsunami 1864 (Mikolov et al., 2013; Pennington et al., 2014). Soon after their introduction, word embeddings were integrated into different NLP applications, thanks to the migration of the field to deep learning and the fact that most deep learning models view words as dense vectors. The waves of the word embedding tsunami have also lapped on the shores of sense representation. Several techniques have been proposed that either extend word embedding models to cluster contexts and induce senses, usually referred to as unsupervised sense representations (Sch¨utze, 1998; Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Guo et al., 2014; Tian et al., 2014; ˇSuster et al., 2016; Ettinger et al., 2016; Qiu et al., 2016) or exploit external sense inventories and lexical resources for generating sense representations for individual meanings of words (Chen et al., 2014; Johansson and Pina, 2015; Jauhar et al., 2015; Iacobacci et al., 2015; Rothe and Sch¨utze, 2015; Camacho-Collados et al., 2016b; Mancini et al., 2016; Pilehvar and Collier, 2016). However, the integration of sense representations into deep learning models has not been so straightforward, and research in this field has often opted for alternative evaluation benchmarks such as WSD, or artificial tasks, such as word similarity. Consequently, the problem of integrating sense representations into downstream NLP applications has remained understudied, despite the potential benefits it can have. Li and Jurafsky (2015) proposed a “multi-sense embedding” pipeline to check the benefit that can be gained by replacing word embeddings with sense embeddings in multiple tasks. With the help of two simple disambiguation algorithms, unsupervised sense embeddings were integrated into various downstream applications, with varying degrees of success. Given the interdependency of sense representation and disambiguation in this model, it is very difficult to introduce alternative algorithms into its pipeline, either to benefit from the state of the art, or to carry out an evaluation. Instead, our pipeline provides the advantage of being modular: thanks to its use of disambiguation in the pre-processing stage and use of sense representations that are linked to external sense inventories, different WSD techniques and sense representations can be easily plugged in and checked. Along the same lines, Flekova and Gurevych (2016) proposed a technique for learning supersense representations, using automatically-annotated corpora. Coupled with a supersense tagger, the representations were fed into a neural network classifier as additional features to the word-based input. Through a set of experiments, Flekova and Gurevych (2016) showed that the supersense enrichment can be beneficial to a range of binary classification tasks. Our proposal is different in that it focuses directly on the benefits that can be gained by semantifying the input, i.e. reducing lexical ambiguity in the input text, rather than assisting the model with additional sources of knowledge. 7 Conclusion and Future Work We proposed a pipeline for the integration of sense level knowledge into a state-of-the-art text classifier. We showed that a simple disambiguation of the input can lead to consistent performance gain, particularly for longer documents and when the granularity of the underlying sense inventory is reduced. 
Our pipeline is modular and can be used as an in vivo evaluation framework for WSD and sense representation techniques. We release our code and data (including pre-trained sense and supersense embeddings) at https://pilehvar.github.io/ sensecnn/ to allow further checking of the choice of hyperparameters and to allow further analysis and comparison. We hope that our work will foster future research on the integration of senselevel knowledge into downstream applications. As future work, we plan to investigate the extension of the approach to other languages and applications. Also, given the promising results observed for supersenses, we plan to investigate taskspecific coarsening of sense inventories, particularly Wikipedia, or the use of SentiWordNet (Baccianella et al., 2010), which could be more suitable for polarity detection. Acknowledgments The authors gratefully acknowledge the support of the MRC grant No. MR/M025160/1 for PheneBank and ERC Consolidator Grant MOUSSE No. 726487. Jose Camacho-Collados is supported by a Google Doctoral Fellowship in Natural Language Processing. Nigel Collier is supported by EPSRC Grant No. EP/M005089/1. We thank Jim McManus for his suggestions on the manuscript and the anonymous reviewers for their helpful comments. 1865 References Eneko Agirre, Oier Lopez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics 40(1):57–84. Eneko Agirre and Oier Lopez. 2003. Clustering WordNet word senses. In Proceedings of Recent Advances in Natural Language Processing. Borovets, Bulgaria, pages 121–130. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC. volume 10, pages 2200–2204. Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of EMNLP. Sumit Bhagwani, Shrutiranjan Satapathy, and Harish Karnick. 2013. Merging word senses. In Proceedings of TextGraphs-8 Graph-based Methods for Natural Language Processing. Seattle, Washington, USA, pages 11–19. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In EMNLP. Jos´e Camacho-Collados, Claudio Delli Bovi, Alessandro Raganato, and Roberto Navigli. 2016a. A Large-Scale Multilingual Disambiguation of Glosses. In Proceedings of LREC. Portoroz, Slovenia, pages 1701–1708. Jose Camacho-Collados and Roberto Navigli. 2017. BabelDomains: Large-Scale Domain Labeling of Lexical Resources. In Proceedings of EACL (2). Valencia, Spain. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016b. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence 240:36–64. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP. Doha, Qatar, pages 1025–1035. Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of EMNLP. Seattle, Washington, pages 1787–1796. Billy Chiu, Anna Korhonen, and Sampo Pyysalo. 2016. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of the Workshop on Evaluating Vector Space Representations for NLP, ACL. Franois Chollet. 2015. Keras. https://github.com/ fchollet/keras. 
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12:2493–2537. Bharath Dandala, Chris Hokamp, Rada Mihalcea, and Razvan C. Bunescu. 2013. Sense clustering using Wikipedia. In Proceedings of Recent Advances in Natural Language Processing. Hissar, Bulgaria, pages 164–171. Li Dong, Furu Wei, Shujie Liu, Ming Zhou, and Ke Xu. 2015. A statistical parsing framework for sentiment classification. Computational Linguistics 41(2):293–336. Luis Espinosa-Anke, Jose Camacho-Collados, Claudio Delli Bovi, and Horacio Saggion. 2016. Supervised distributional hypernym discovery via domain adaptation. In Proceedings of EMNLP. pages 424–435. Allyson Ettinger, Philip Resnik, and Marine Carpuat. 2016. Retrofitting sense-specific word vectors using parallel text. In Proceedings of NAACL-HLT. San Diego, California, pages 1378–1383. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. Lucie Flekova and Iryna Gurevych. 2016. Supersense embeddings: A unified model for supersense interpretation, prediction, and utilization. In Proceedings of ACL. Derek Greene and P´adraig Cunningham. 2006. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd International conference on Machine learning. ACM, pages 377–384. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning sense-specific word embeddings by exploiting bilingual resources. In COLING. pages 497–507. Taher H. Haveliwala. 2002. Topic-sensitive PageRank. In Proceedings of the 11th International Conference on World Wide Web. Hawaii, USA, pages 517–526. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR abs/1207.0580. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Compasutation 9(8):1735–1780. Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. Kore: keyphrase overlap relatedness for entity disambiguation. In Proceedings of CIKM. pages 545– 554. 1866 Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and Artificial Intelligence: The story so far. Artificial Intelligence 194:2–27. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL. Jeju Island, Korea, pages 873–882. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings for word and relational similarity. In Proceedings of ACL. Beijing, China, pages 95–105. Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of NAACL. Denver, Colorado, pages 683–693. Richard Johansson and Luis Nieto Pina. 2015. Embedding a semantic network in a word space. In Proceedings of NAACL. Denver, Colorado, pages 1428– 1433. Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of NAACL. Denver, Colorado, pages 103–112. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL. 
Baltimore, USA, pages 655–665. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP. Doha, Qatar, pages 1746–1751. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proceedings of AAAI. Phoenix, Arizona, pages 2741–2749. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Proceedings of the 12th International Conference on Machine Learning. Tahoe City, California, pages 331–339. Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of EMNLP. Lisbon, Portugal, pages 683–693. Antonio Lieto, Enrico Mensa, and Daniele P Radicioni. 2016. A resource-driven approach for anchoring linguistic resources to conceptual spaces. In AI* IA 2016 Advances in Artificial Intelligence, Springer, pages 435–449. Nut Limsopatham and Nigel Collier. 2016. Normalising medical concepts in social media texts by learning semantic representation. In Proceedings of ACL. Berlin, Germany, pages 1014–1023. Xiao Ling, Sameer Singh, and Daniel S Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics 3:315–328. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL-HLT. Portland, Oregon, USA, pages 142–150. Massimiliano Mancini, Jos´e Camacho-Collados, Ignacio Iacobacci, and Roberto Navigli. 2016. Embedding words and senses together via joint knowledge-enhanced training. CoRR abs/1612.02703. http://arxiv.org/abs/1612.02703. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Berlin, Germany, pages 51–61. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM 38(11):39–41. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL) 2:231–244. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning. pages 807–814. Roberto Navigli. 2006. Meaningful clustering of senses helps boost Word Sense Disambiguation performance. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL). Sydney, Australia, pages 105–112. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence 193:217– 250. 1867 Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of EMNLP. Doha, Qatar, pages 1059–1069. 
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL. Barcelona, Spain, pages 51–61. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL. Ann Arbor, Michigan, pages 115–124. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP. pages 1532–1543. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of EMNLP. Austin, TX, pages 1680–1690. Mohammad Taher Pilehvar and Roberto Navigli. 2014. A large-scale pseudoword-based evaluation framework for state-of-the-art Word Sense Disambiguation. Computational Linguistics 40(4). Lin Qiu, Kewei Tu, and Yong Yu. 2016. Contextdependent sense embedding. In Proceedings of EMNLP. Austin, Texas, pages 183–191. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of EACL. Valencia, Spain, pages 99–110. Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proceedings of ACL. pages 109–117. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of ACL. Beijing, China, pages 1793–1803. Stefan R¨ud, Massimiliano Ciaramita, Jens M¨uller, and Hinrich Sch¨utze. 2011. Piggyback: Using search engines for robust cross-domain named entity recognition. In Proceedings of ACL-HLT. Portland, Oregon, USA, pages 965–975. Ivan A Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for nlp. In International Conference on Intelligent Text Processing and Computational Linguistics. Mexico City, Mexico, pages 1–15. Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In NAACLHTL. Denver, Colorado, pages 977–983. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. Computational linguistics 24(1):97–123. Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013. Learning semantic textual similarity with structural representations. In Proceedings of ACL (2). Sofia, Bulgaria, pages 714–718. Rion Snow, Sushant Prakash, Daniel Jurafsky, and Andrew Y. Ng. 2007. Learning to merge word senses. In Proceedings of EMNLP. Prague, Czech Republic, pages 1005–1014. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Parsing with compositional vector grammars. In Proceedings of EMNLP. Sofia, Bulgaria, pages 455–465. Simon ˇSuster, Ivan Titov, and Gertjan van Noord. 2016. Bilingual learning of multi-sense embeddings with discrete autoencoders. In Proceedings of NAACLHLT. San Diego, California, pages 1346–1356. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Porceedings of EMNLP. Lisbon, Portugal, pages 1422–1432. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In COLING. 
pages 151–160. Rocco Tripodi and Marcello Pelillo. 2017. A gametheoretic approach to word sense disambiguation. Computational Linguistics 43(1):31–70. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of EMNLP (2). Lisbon, Portugal, pages 2049–2054. Yulia Tsvetkov and Shuly Wintner. 2014. Identification of multiword expressions by combining multiple linguistic information sources. Computational Linguistics 40(2):449–468. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of ACL. Beijing, China, pages 323–333. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. CoRR abs/1602.00367. Shweta Yadav, Asif Ekbal, Sriparna Saha, and Pushpak Bhattacharyya. 2017. Entity extraction in biomedical corpora: An approach to evaluate word embedding features with pso based feature selection. In 1868 Proceedings of EACL. Valencia, Spain, pages 1159– 1170. Yiming Yang. 1999. An evaluation of statistical approaches to text categorization. Information retrieval 1(1-2):69–90. Jay Young, Valerio Basile, Lars Kunze, Elena Cabrio, and Nick Hawes. 2016. Towards lifelong object learning by integrating situated robot perception and semantic web mining. In Proceedings of the European Conference on Artificial Intelligence conference. The Hague, Netherland, pages 1458–1466. Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A wide-coverage Word Sense Disambiguation system for free text. In Proceedings of the ACL System Demonstrations. Uppsala, Sweden, pages 78–83. Will Y. Zou, Richard Socher, Daniel M. Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of EMNLP. Seattle, USA, pages 1393– 1398. 1869
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1870–1879 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1171 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1870–1879 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1171 Reading Wikipedia to Answer Open-Domain Questions Danqi Chen∗ Computer Science Stanford University Stanford, CA 94305, USA [email protected] Adam Fisch, Jason Weston & Antoine Bordes Facebook AI Research 770 Broadway New York, NY 10003, USA {afisch,jase,abordes}@fb.com Abstract This paper proposes to tackle opendomain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task. 1 Introduction This paper considers the problem of answering factoid questions in an open-domain setting using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia. Wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines — if they are able to leverage its power. Unlike knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) or DBPedia (Auer et al., 2007), which are easier for computers to process but too sparsely populated for open-domain question answering (Miller et al., ∗Most of this work was done while DC was with Facebook AI Research. 2016), Wikipedia contains up-to-date knowledge that humans are interested in. It is designed, however, for humans – not machines – to read. Using Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. We term this setting, machine reading at scale (MRS). Our work treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers. Large-scale QA systems like IBM’s DeepQA (Ferrucci et al., 2010) rely on multiple sources to answer: besides Wikipedia, it is also paired with KBs, dictionaries, and even news articles, books, etc. As a result, such systems heavily rely on information redundancy among the sources to answer correctly. Having a single knowledge source forces the model to be very precise while searching for an answer as the evidence might appear only once. 
This challenge thus encourages research in the ability of a machine to read, a key motivation for the machine comprehension subfield and the creation of datasets such as SQuAD (Rajpurkar et al., 2016), CNN/Daily Mail (Hermann et al., 2015) and CBT (Hill et al., 2016). However, those machine comprehension resources typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an opendomain QA system. In sharp contrast, methods that use KBs or information retrieval over documents have to employ search as an integral part of 1870 the solution. Instead MRS is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource. In this paper, we show how multiple existing QA datasets can be used to evaluate MRS by requiring an open-domain system to perform well on all of them at once. We develop DrQA, a strong system for question answering from Wikipedia composed of: (1) Document Retriever, a module using bigram hashing and TF-IDF matching designed to, given a question, efficiently return a subset of relevant articles and (2) Document Reader, a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in those few returned documents. Figure 1 gives an illustration of DrQA. Our experiments show that Document Retriever outperforms the built-in Wikipedia search engine and that Document Reader reaches state-of-theart results on the very competitive SQuAD benchmark (Rajpurkar et al., 2016). Finally, our full system is evaluated using multiple benchmarks. In particular, we show that performance is improved across all datasets through the use of multitask learning and distant supervision compared to single task training. 2 Related Work Open-domain QA was originally defined as finding answers in collections of unstructured documents, following the setting of the annual TREC competitions1. With the development of KBs, many recent innovations have occurred in the context of QA from KBs with the creation of resources like WebQuestions (Berant et al., 2013) and SimpleQuestions (Bordes et al., 2015) based on the Freebase KB (Bollacker et al., 2008), or on automatically extracted KBs, e.g., OpenIE triples and NELL (Fader et al., 2014). However, KBs have inherent limitations (incompleteness, fixed schemas) that motivated researchers to return to the original setting of answering from raw text. A second motivation to cast a fresh look at this problem is that of machine comprehension of text, i.e., answering questions after reading a short text or story. That subfield has made considerable progress recently thanks to new deep learning architectures like attention-based and memory1http://trec.nist.gov/data/qamain.html augmented neural networks (Bahdanau et al., 2015; Weston et al., 2015; Graves et al., 2014) and release of new training and evaluation datasets like QuizBowl (Iyyer et al., 2014), CNN/Daily Mail based on news articles (Hermann et al., 2015), CBT based on children books (Hill et al., 2016), or SQuAD (Rajpurkar et al., 2016) and WikiReading (Hewlett et al., 2016), both based on Wikipedia. An objective of this paper is to test how such new methods can perform in an open-domain QA framework. QA using Wikipedia as a resource has been explored previously. Ryu et al. (2014) perform opendomain QA using a Wikipedia-based knowledge model. 
They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, Ahn et al. (2004) also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. Buscaldi and Rosso (2006) also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of machine reading at scale, as described in the introduction.

There are a number of highly developed full pipeline QA approaches using either the Web, as does QuASE (Sun et al., 2015), or Wikipedia as a resource, as do Microsoft's AskMSR (Brill et al., 2002), IBM's DeepQA (Ferrucci et al., 2010) and YodaQA (Baudiš, 2015; Baudiš and Šedivý, 2015) — the latter of which is open source and hence reproducible for comparison purposes. AskMSR is a search-engine based QA system that relies on “data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers”, i.e., it does not focus on machine comprehension, as we do. DeepQA is a very sophisticated system that relies on both unstructured information including text documents as well as structured data such as KBs, databases and ontologies to generate candidate answers or vote over evidence. YodaQA is an open source system modeled after DeepQA, similarly combining websites, information extraction, databases and Wikipedia in particular. Our comprehension task is made more challenging by only using a single resource. Comparing against these methods provides a useful datapoint for an “upper bound” benchmark on performance.

Figure 1: An overview of our question answering system DrQA. (Residual labels from the figure: the example question “How many of Warsaw's inhabitants spoke Polish in 1933?”, the answer 833,500, the Document Retriever and Document Reader components, and the open-domain QA datasets SQuAD, TREC, WebQuestions, WikiMovies.)

Multitask learning (Caruana, 1998) and task transfer have a rich history in machine learning (e.g., using ImageNet in the computer vision community (Huh et al., 2016)), as well as in NLP in particular (Collobert and Weston, 2008). Several works have attempted to combine multiple QA training datasets via multitask learning to (i) achieve improvement across the datasets via task transfer; and (ii) provide a single general system capable of asking different kinds of questions due to the inevitably different data distributions across the source datasets. Fader et al. (2014) used WebQuestions, TREC and WikiAnswers with four KBs as knowledge sources and reported improvement on the latter two datasets through multitask learning. Bordes et al. (2015) combined WebQuestions and SimpleQuestions using distant supervision with Freebase as the KB to give slight improvements on both datasets, although poor performance was reported when training on only one dataset and testing on the other, showing that task transfer is indeed a challenging subject; see also (Kadlec et al., 2016) for a similar conclusion. Our work follows similar themes, but in the setting of having to retrieve and then read text documents, rather than using a KB, with positive results.

3 Our System: DrQA

In the following we describe our system DrQA for MRS which consists of two components: (1) the Document Retriever module for finding relevant articles and (2) a machine comprehension model, Document Reader, for extracting answers from a single document or a small collection of documents.

3.1 Document Retriever

Following classical QA systems, we use an efficient (non-machine learning) document retrieval system to first narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API (Gormley and Tong, 2015). Articles and questions are compared as TF-IDF weighted bag-of-word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of (Weinberger et al., 2009) to map the bigrams to 2^24 bins with an unsigned murmur3 hash.

We use Document Retriever as the first part of our full model, by setting it to return 5 Wikipedia articles given any question. Those articles are then processed by Document Reader.
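As a rough approximation of this retrieval component, the sketch below builds hashed unigram/bigram TF-IDF vectors with scikit-learn (whose HashingVectorizer also relies on MurmurHash3) and ranks articles by dot product. It is a simplified stand-in rather than the authors' implementation; the class and variable names are illustrative.

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

class SimpleRetriever:
    """Hashed bigram TF-IDF retrieval, a simplified stand-in for Document Retriever."""

    def __init__(self, docs, n_features=2 ** 24):
        self.docs = docs
        # Hash unigram and bigram counts into 2^24 bins.
        self.vectorizer = HashingVectorizer(ngram_range=(1, 2),
                                            n_features=n_features,
                                            alternate_sign=False, norm=None)
        self.tfidf = TfidfTransformer()
        counts = self.vectorizer.transform(docs)
        self.doc_matrix = self.tfidf.fit_transform(counts)

    def closest_docs(self, question, k=5):
        q = self.tfidf.transform(self.vectorizer.transform([question]))
        scores = (self.doc_matrix @ q.T).toarray().ravel()   # TF-IDF dot products
        top = scores.argsort()[::-1][:k]
        return [(self.docs[i], float(scores[i])) for i in top]
```

Calling closest_docs(question, k=5) then mirrors the setting above, where the top 5 articles are handed to Document Reader.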
3.2 Document Reader

Our Document Reader model is inspired by the recent success of neural network models on machine comprehension tasks, in a similar spirit to the AttentiveReader described in (Hermann et al., 2015; Chen et al., 2016).

Given a question q consisting of l tokens {q_1, ..., q_l} and a document or a small set of documents of n paragraphs where a single paragraph p consists of m tokens {p_1, ..., p_m}, we develop an RNN model that we apply to each paragraph in turn and then finally aggregate the predicted answers. Our method works as follows:

Paragraph encoding We first represent all tokens p_i in a paragraph p as a sequence of feature vectors p̃_i ∈ R^d and pass them as the input to a recurrent neural network and thus obtain:

{p_1, ..., p_m} = RNN({p̃_1, ..., p̃_m}),

where p_i is expected to encode useful context information around token p_i. Specifically, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take p_i as the concatenation of each layer's hidden units in the end.

The feature vector p̃_i is comprised of the following parts:

• Word embeddings: f_emb(p_i) = E(p_i). We use the 300-dimensional Glove word embeddings trained from 840B Web crawl data (Pennington et al., 2014). We keep most of the pre-trained word embeddings fixed and only fine-tune the 1000 most frequent question words because the representations of some key words such as what, how, which, many could be crucial for QA systems.

• Exact match: f_exact_match(p_i) = I(p_i ∈ q). We use three simple binary features, indicating whether p_i can be exactly matched to one question word in q, either in its original, lowercase or lemma form. These simple features turn out to be extremely helpful, as we will show in Section 5.

• Token features: f_token(p_i) = (POS(p_i), NER(p_i), TF(p_i)). We also add a few manual features which reflect some properties of token p_i in its context, which include its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF).

• Aligned question embedding: Following (Lee et al., 2016) and other recent works, the last part we incorporate is an aligned question embedding f_align(p_i) = Σ_j a_{i,j} E(q_j), where the attention score a_{i,j} captures the similarity between p_i and each question word q_j. Specifically, a_{i,j} is computed by the dot products between nonlinear mappings of word embeddings:

$$a_{i,j} = \frac{\exp\left(\alpha(E(p_i)) \cdot \alpha(E(q_j))\right)}{\sum_{j'} \exp\left(\alpha(E(p_i)) \cdot \alpha(E(q_{j'}))\right)},$$

and α(·) is a single dense layer with ReLU nonlinearity. Compared to the exact match features, these features add soft alignments between similar but non-identical words (e.g., car and vehicle).

Question encoding The question encoding is simpler, as we only apply another recurrent neural network on top of the word embeddings of q_i and combine the resulting hidden units into one single vector: {q_1, ..., q_l} → q. We compute q = Σ_j b_j q_j where b_j encodes the importance of each question word:

$$b_j = \frac{\exp(w \cdot q_j)}{\sum_{j'} \exp(w \cdot q_{j'})},$$

and w is a weight vector to learn.

Prediction At the paragraph level, the goal is to predict the span of tokens that is most likely the correct answer. We take the paragraph vectors {p_1, ..., p_m} and the question vector q as input, and simply train two classifiers independently for predicting the two ends of the span. Concretely, we use a bilinear term to capture the similarity between p_i and q and compute the probabilities of each token being start and end as:

$$P_{start}(i) \propto \exp(p_i W_s q), \qquad P_{end}(i) \propto \exp(p_i W_e q).$$

During prediction, we choose the best span from token i to token i' such that i ≤ i' ≤ i + 15 and P_start(i) × P_end(i') is maximized. To make scores compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction.
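The decoding step described in the last paragraph can be summarized with a short sketch: given unnormalized start and end scores for each paragraph's tokens (assumed to come from the two bilinear classifiers), it keeps spans of at most 15 tokens and maximizes the product of unnormalized exponentials across all retrieved paragraphs. Everything here is illustrative rather than the authors' code.

```python
import numpy as np

def decode_best_span(paragraph_scores, max_len=15):
    """Pick the best answer span across paragraphs.

    paragraph_scores : list of (start_scores, end_scores) pairs, one per
                       paragraph, holding the unnormalized bilinear scores
                       p_i W_s q and p_i W_e q for its tokens.
    Returns (paragraph_index, start, end, score).
    """
    best = None
    for p_idx, (start_scores, end_scores) in enumerate(paragraph_scores):
        # unnormalized exponentials keep scores comparable across paragraphs
        s = np.exp(start_scores)
        e = np.exp(end_scores)
        n = len(s)
        for i in range(n):
            j_max = min(n, i + max_len + 1)        # enforce i <= i' <= i + 15
            j = i + int(np.argmax(e[i:j_max]))     # best end for this start
            score = float(s[i] * e[j])
            if best is None or score > best[3]:
                best = (p_idx, i, j, score)
    return best
```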
4 Data Our work relies on three types of data: (1) Wikipedia that serves as our knowledge source for finding answers, (2) the SQuAD dataset which is our main resource to train Document Reader and (3) three more QA datasets (CuratedTREC, WebQuestions and WikiMovies) that in addition to SQuAD, are used to test the open-domain QA abilities of our full system, and to evaluate the ability of our model to learn from multitask learning and distant supervision. Statistics of the datasets are given in Table 2. 4.1 Wikipedia (Knowledge Source) We use the 2016-12-21 dump2 of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. For each page, only the plain text is extracted and all structured data sections such as lists and figures are stripped.3 After discarding internal disambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types. 4.2 SQuAD The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is a dataset for machine comprehension based on Wikipedia. The dataset contains 87k examples for training and 10k for development, with a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph and a model is given credit if its predicted answer matches it. Two evaluation metrics are used: exact string match (EM) and F1 score, which measures the weighted average of precision and recall at the token level. In the following, we use SQuAD for training and evaluating our Document Reader for the standard machine comprehension task given the rel2https://dumps.wikimedia.org/enwiki/ latest 3We use the WikiExtractor script: https://github. com/attardi/wikiextractor. evant paragraph as defined in (Rajpurkar et al., 2016). For the task of evaluating open-domain question answering over Wikipedia, we use the SQuAD development set QA pairs only, and we ask systems to uncover the correct answer spans without having access to the associated paragraphs. That is, a model is required to answer a question given the whole of Wikipedia as a resource; it is not given the relevant paragraph as in the standard SQuAD setting. 4.3 Open-domain QA Evaluation Resources SQuAD is one of the largest general purpose QA datasets currently available. SQuAD questions have been collected via a process involving showing a paragraph to each human annotator and asking them to write a question. As a result, their distribution is quite specific. We hence propose to train and evaluate our system on other datasets developed for open-domain QA that have been constructed in different ways (not necessarily in the context of answering from Wikipedia). CuratedTREC This dataset is based on the benchmarks from the TREC QA tasks that have been curated by Baudiˇs and ˇSediv`y (2015). We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.4 WebQuestions Introduced in (Berant et al., 2013), this dataset is built to answer questions from the Freebase KB. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. We convert each answer to text by using entity names so that the dataset does not reference Freebase IDs and is purely made of plain text question-answer pairs. 
WikiMovies This dataset, introduced in (Miller et al., 2016), contains 96k question-answer pairs in the domain of movies. Originally created from the OMDb and MovieLens databases, the examples are built such that they can also be answered by using a subset of Wikipedia as the knowledge source (the title and the first section of articles from the movie domain). 4This dataset is available at https://github.com/ brmson/dataset-factoid-curated. 1874 Dataset Example Article / Paragraph SQuAD Q: How many provinces did the Ottoman empire contain in the 17th century? A: 32 Article: Ottoman Empire Paragraph: ... At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted various types of autonomy during the course of centuries. CuratedTREC Q: What U.S. state’s motto is “Live free or Die”? A: New Hampshire Article: Live Free or Die Paragraph: ”Live Free or Die” is the official motto of the U.S. state of New Hampshire, adopted by the state in 1945. It is possibly the best-known of all state mottos, partly because it conveys an assertive independence historically found in American political philosophy and partly because of its contrast to the milder sentiments found in other state mottos. WebQuestions Q: What part of the atom did Chadwick discover?† A: neutron Article: Atom Paragraph: ... The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. ... WikiMovies Q: Who wrote the film Gigli? A: Martin Brest Article: Gigli Paragraph: Gigli is a 2003 American romantic comedy film written and directed by Martin Brest and starring Ben Affleck, Jennifer Lopez, Justin Bartha, Al Pacino, Christopher Walken, and Lainie Kazan. Table 1: Example training data from each QA dataset. In each case we show an associated paragraph where distant supervision (DS) correctly identified the answer within it, which is highlighted. Dataset Train Test Plain DS SQuAD 87,599 71,231 10,570† CuratedTREC 1,486∗ 3,464 694 WebQuestions 3,778∗ 4,602 2,032 WikiMovies 96,185∗36,301 9,952 Table 2: Number of questions for each dataset used in this paper. DS: distantly supervised training data. ∗: These training sets are not used as is because no paragraph is associated with each question. †: Corresponds to SQuAD development set. 4.4 Distantly Supervised Data All the QA datasets presented above contain training portions, but CuratedTREC, WebQuestions and WikiMovies only contain question-answer pairs, and not an associated document or paragraph as in SQuAD, and hence cannot be used for training Document Reader directly. Following previous work on distant supervision (DS) for relation extraction (Mintz et al., 2009), we use a procedure to automatically associate paragraphs to such training examples, and then add these examples to our training set. We use the following process for each questionanswer pair to build our training set. First, we Dataset Wiki Doc. Retriever Search plain +bigrams SQuAD 62.7 76.1 77.8 CuratedTREC 81.0 85.2 86.0 WebQuestions 73.7 75.5 74.4 WikiMovies 61.7 54.4 70.3 Table 3: Document retrieval results. % of questions for which the answer segment appears in one of the top 5 pages returned by the method. run Document Retriever on the question to retrieve the top 5 Wikipedia articles. 
All paragraphs from those articles without an exact match of the known answer are directly discarded. All paragraphs shorter than 25 or longer than 1500 characters are also filtered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remaining paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20 token window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS training dataset. Some examples are shown in Table 1 and data statistics are given in Table 2. 1875 Note that we can also generate additional DS data for SQuAD by trying to find mentions of the answers not just in the paragraph provided, but also from other pages or the same page that the given paragraph was in. We observe that around half of the DS examples come from pages outside of the articles used in SQuAD. 5 Experiments This section first presents evaluations of our Document Retriever and Document Reader modules separately, and then describes tests of their combination, DrQA, for open-domain QA on the full Wikipedia. 5.1 Finding Relevant Articles We first examine the performance of our Document Retriever module on all the QA datasets. Table 3 compares the performance of the two approaches described in Section 3.1 with that of the Wikipedia Search Engine5 for the task of finding articles that contain the answer given a question. Specifically, we compute the ratio of questions for which the text span of any of their associated answers appear in at least one the top 5 relevant pages returned by each system. Results on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bigram hashing. We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding questions and articles as bag-of-embeddings), both of which we find performed worse. 5.2 Reader Evaluation on SQuAD Next we evaluate our Document Reader component on the standard SQuAD evaluation (Rajpurkar et al., 2016). Implementation details We use 3-layer bidirectional LSTMs with h = 128 hidden units for both paragraph and question encoding. We apply the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization and also generating lemma, partof-speech, and named entity tags. Lastly, all the training examples are sorted by the length of paragraph and divided into minibatches of 32 examples each. We use Adamax for optimization as described in (Kingma and Ba, 5We use the Wikipedia Search API https://www. mediawiki.org/wiki/API:Search. 2014). Dropout with p = 0.3 is applied to word embeddings and all the hidden units of LSTMs. Result and analysis Table 4 presents our evaluation results on both development and test sets. SQuAD has been a very competitive machine comprehension benchmark since its creation and we only list the best-performing systems in the table. Our system (single model) can achieve 70.0% exact match and 79.0% F1 scores on the test set, which surpasses all the published results and can match the top performance on the SQuAD leaderboard at the time of writing. Additionally, we think that our model is conceptually simpler than most of the existing systems. We conducted an ablation analysis on the feature vector of paragraph tokens. 
As shown in Table 5 all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More interestingly, if we remove both faligned and fexact match, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representation related to the paraphrased nature of a question vs. the context around an answer. 5.3 Full Wikipedia Question Answering Finally, we assess the performance of our full system DrQA for answering open-domain questions using the four datasets introduced in Section 4. We compare three versions of DrQA which evaluate the impact of using distant supervision and multitask learning across the training sources provided to Document Reader (Document Retriever remains the same for each case): • SQuAD: A single Document Reader model is trained on the SQuAD training set only and used on all evaluation sets. • Fine-tune (DS): A Document Reader model is pre-trained on SQuAD and then fine-tuned for each dataset independently using its distant supervision (DS) training set. • Multitask (DS): A single Document Reader model is jointly trained on the SQuAD training set and all the DS sources. For the full Wikipedia setting we use a streamlined model that does not use the CoreNLP parsed ftoken features or lemmas for fexact match. We 1876 Method Dev Test EM F1 EM F1 Dynamic Coattention Networks (Xiong et al., 2016) 65.4 75.6 66.2 75.9 Multi-Perspective Matching (Wang et al., 2016)† 66.1 75.8 65.5 75.1 BiDAF (Seo et al., 2016) 67.7 77.3 68.0 77.3 R-net† n/a n/a 71.3 79.7 DrQA (Our model, Document Reader Only) 69.5 78.8 70.0 79.0 Table 4: Evaluation results on the SQuAD dataset (single model only). †: Test results reflect the SQuAD leaderboard (https://stanford-qa.com) as of Feb 6, 2017. Features F1 Full 78.8 No ftoken 78.0 (-0.8) No fexact match 77.3 (-1.5) No faligned 77.3 (-1.5) No faligned and fexact match 59.4 (-19.4) Table 5: Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set. find that while these help for more exact paragraph reading in SQuAD, they don’t improve results in the full setting. Additionally, WebQuestions and WikiMovies provide a list of candidate answers (e.g., 1.6 million Freebase entity strings for WebQuestions) and we restrict the answer span must be in this list during prediction. Results Table 6 presents the results. Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets. We are interested in a single, full system that can answer any question using Wikipedia. The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. However performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as fine-tuning on each dataset alone using DS also gives improvements, showing that is is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system. 
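Returning for a moment to the ablation in Table 5: the two most important paragraph-token features can be sketched roughly as below. This is a toy numpy sketch based on the usual definitions of these features, a binary exact-match indicator and an attention-weighted average of question-word embeddings; the ReLU projection, the embedding sizes and the random inputs are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def exact_match_feature(paragraph_tokens, question_tokens):
    """f_exact_match: 1.0 if the paragraph token also appears in the question
    (here a simple lowercase comparison; other token forms could be used as well)."""
    q = {t.lower() for t in question_tokens}
    return np.array([[1.0 if t.lower() in q else 0.0] for t in paragraph_tokens])

def aligned_question_embedding(p_emb, q_emb, proj):
    """f_aligned: soft attention over question word embeddings.
    p_emb: (m, d) paragraph embeddings, q_emb: (n, d) question embeddings,
    proj: callable mapping (k, d) -> (k, h), e.g. a dense layer with ReLU (assumed)."""
    scores = proj(p_emb) @ proj(q_emb).T            # (m, n) dot products
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over question words
    return attn @ q_emb                             # (m, d) weighted question embeddings

# Toy usage with random embeddings and a random ReLU projection.
rng = np.random.default_rng(0)
d, h = 8, 4
W = rng.normal(size=(d, h))
proj = lambda x: np.maximum(x @ W, 0.0)
p_emb, q_emb = rng.normal(size=(5, d)), rng.normal(size=(3, d))
print(aligned_question_embedding(p_emb, q_emb, proj).shape)                  # (5, 8)
print(exact_match_feature(["the", "neutron", "was"], ["what", "is", "the", "neutron"]))
```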
We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA (Baudiˇs, 2015), giving results which were previously reported on CuratedTREC and WebQuestions. Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4). The gap is slightly bigger on WebQuestions, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly. DrQA’s performance on SQuAD compared to its Document Reader component on machine comprehension in Table 4 shows a large drop (from 69.5 to 27.1) as we now are given Wikipedia to read, not a single paragraph. Given the correct document (but not the paragraph) we can achieve 49.4, indicating many false positives come from highly topical sentences. This is despite the fact that the Document Retriever works relatively well (77.8% of the time retrieving the answer, see Table 3). It is worth noting that a large part of the drop comes from the nature of the SQuAD questions. They were written with a specific paragraph in mind, thus their language can be ambiguous when the context is removed. Additional resources other than SQuAD, specifically designed for MRS, might be needed to go further. 6 Conclusion We studied the task of machine reading at scale, by using Wikipedia as the unique knowledge source for open-domain QA. Our results indicate that MRS is a key challenging task for researchers to focus on. Machine comprehension systems alone cannot solve the overall task. Our method integrates search, distant supervision, and multitask learning to provide an effective complete system. Evaluating the individual components as well as the full system across multiple benchmarks showed the efficacy of our approach. 1877 Dataset YodaQA DrQA SQuAD +Fine-tune (DS) +Multitask (DS) SQuAD (All Wikipedia) n/a 27.1 28.4 29.8 CuratedTREC 31.3 19.7 25.7 25.4 WebQuestions 39.8 11.8 19.5 20.7 WikiMovies n/a 24.5 34.3 36.5 Table 6: Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Finetune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/ yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2. Future work should aim to improve over our DrQA system. Two obvious angles of attack are: (i) incorporate the fact that Document Reader aggregates over multiple paragraphs and documents directly in the training, as it currently trains on paragraphs independently; and (ii) perform endto-end training across the Document Retriever and Document Reader pipeline, rather than independent systems. Acknowledgments The authors thank Pranav Rajpurkar for testing Document Reader on the test set of SQuAD. References David Ahn, Valentin Jijkoun, Gilad Mishne, Karin Mller, Maarten de Rijke, and Stefan Schlobach. 2004. Using wikipedia at the trec qa track. In Proceedings of TREC 2004. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, Springer, pages 722–735. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). 
Petr Baudiˇs. 2015. YodaQA: a modular question answering system pipeline. In POSTER 2015-19th International Student Conference on Electrical Engineering. pages 1156–1165. Petr Baudiˇs and Jan ˇSediv`y. 2015. Modeling of the question answering task in the YodaQA system. In International Conference of the CrossLanguage Evaluation Forum for European Languages. Springer, pages 222–228. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). pages 1533–1544. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering system. In Empirical Methods in Natural Language Processing (EMNLP). pages 257–264. Davide Buscaldi and Paolo Rosso. 2006. Mining knowledge from Wikipedia for the question answering task. In International Conference on Language Resources and Evaluation (LREC). pages 727–730. Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95–133. Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL). Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In International Conference on Machine Learning (ICML). Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In ACM SIGKDD international conference on Knowledge discovery and data mining. pages 1156–1165. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI magazine 31(3):59–79. 1878 Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide. ” O’Reilly Media, Inc.”. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS). Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. In Association for Computational Linguistics (ACL). pages 1535–1545. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading children’s books with explicit memory representations. In International Conference on Learning Representations (ICLR). Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614 . Mohit Iyyer, Jordan L Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daum´e III. 2014. 
A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing (EMNLP). pages 633–644. Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2016. From particular to general: A preliminary case study of transfer learning in reading comprehension. Machine Intelligence Workshop, NIPS . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Association for Computational Linguistics (ACL). pages 55–60. Alexander H. Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Empirical Methods in Natural Language Processing (EMNLP). pages 1400– 1409. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP). pages 1003–1011. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP). Pum-Mo Ryu, Myung-Gil Jang, and Hyun-Ki Kim. 2014. Open domain question answering using Wikipedia-based knowledge model. Information Processing & Management 50(5):683–692. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open domain question answering via semantic enrichment. In Proceedings of the 24th International Conference on World Wide Web. ACM, pages 1045–1055. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In International Conference on Machine Learning (ICML). pages 1113–1120. Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations (ICLR). Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . 1879
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1880–1890 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1172 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1880–1890 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1172 Learning to Skim Text Adams Wei Yu∗ Carnegie Mellon University [email protected] Hongrae Lee Google [email protected] Quoc V. Le Google [email protected] Abstract Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making it slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach of reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy. 1 Introduction The last few years have seen much success of applying neural networks to many important applications in natural language processing, e.g., partof-speech tagging, chunking, named entity recognition (Collobert et al., 2011), sentiment analysis (Socher et al., 2011, 2013), document classification (Kim, 2014; Le and Mikolov, 2014; Zhang et al., 2015; Dai and Le, 2015), machine translation (Kalchbrenner and Blunsom, 2013; Sutskever ∗Most of work was done when AWY was with Google. et al., 2014; Bahdanau et al., 2014; Sennrich et al., 2015; Wu et al., 2016), conversational/dialogue modeling (Sordoni et al., 2015; Vinyals and Le, 2015; Shang et al., 2015), document summarization (Rush et al., 2015; Nallapati et al., 2016), parsing (Andor et al., 2016) and automatic question answering (Q&A) (Weston et al., 2015; Hermann et al., 2015; Wang and Jiang, 2016; Wang et al., 2016; Trischler et al., 2016; Lee et al., 2016; Seo et al., 2016; Xiong et al., 2016). An important characteristic of all these models is that they read all the text available to them. While it is essential for certain applications, such as machine translation, this characteristic also makes it slow to apply these models to scenarios that have long input text, such as document classification or automatic Q&A. However, the fact that texts are usually written with redundancy inspires us to think about the possibility of reading selectively. In this paper, we consider the problem of understanding documents with partial reading, and propose a modification to the basic neural architectures that allows them to read input text with skipping. The main benefit of this approach is faster inference because it skips irrelevant information. An unexpected benefit of this approach is that it also helps the models generalize better. 
In our approach, the model is a recurrent network, which learns to predict the number of jumping steps after it reads one or several input tokens. Such a discrete model is therefore not fully differentiable, but it can be trained by a standard policy gradient algorithm, where the reward can be the accuracy or its proxy during training. In our experiments, we use the basic LSTM recurrent networks (Hochreiter and Schmidhuber, 1997) as the base model and benchmark the proposed algorithm on a range of document classification or reading comprehension tasks, using various datasets such as Rotten Tomatoes (Pang 1880 Figure 1: A synthetic example of the proposed model to process a text document. In this example, the maximum size of jump K is 5, the number of tokens read before a jump R is 2 and the number of jumps allowed N is 10. The green softmax are for jumping predictions. The processing stops if a) the jumping softmax predicts a 0 or b) the jump times exceeds N or c) the network processed the last token. We only show the case a) in this figure. and Lee, 2005), IMDB (Maas et al., 2011), AG News (Zhang et al., 2015) and Children’s Book Test (Hill et al., 2015). We find that the proposed approach of selective reading speeds up the base model by two to six times. Surprisingly, we also observe our model beats the standard LSTM in terms of accuracy. In summary, the main contribution of our work is to design an architecture that learns to skim text and show that it is both faster and more accurate in practical applications of text processing. Our model is simple and flexible enough that we anticipate it would be able to incorporate to recurrent nets with more sophisticated structures to achieve even better performance in the future. 2 Methodology In this section, we introduce the proposed model named LSTM-Jump. We first describe its main structure, followed by the difficulty of estimating part of the model parameters because of nondifferentiability. To address this issue, we appeal to a reinforcement learning formulation and adopt a policy gradient method. 2.1 Model Overview The main architecture of the proposed model is shown in Figure 1, which is based on an LSTM recurrent neural network. Before training, the number of jumps allowed N, the number of tokens read between every two jumps R and the maximum size of jumping K are chosen ahead of time. While K is a fixed parameter of the model, N and R are hyperparameters that can vary between training and testing. Also, throughout the paper, we would use d1:p to denote a sequence d1, d2, ..., dp. In the following, we describe in detail how the model operates when processing text. Given a training example x1:T , the recurrent network will read the embedding of the first R tokens x1:R and output the hidden state. Then this state is used to compute the jumping softmax that determines a distribution over the jumping steps between 1 and K. The model then samples from this distribution a jumping step, which is used to decide the next token to be read into the model. Let κ be the sampled value, then the next starting token is xR+κ. Such process continues until either a) the jump softmax samples a 0; or b) the number of jumps exceeds N; or c) the model reaches the last token xT . After stopping, as the output, the latest hidden state is further used for predicting desired targets. How to leverage the hidden state depends on the specifics of the task at hand. 
For example, for the classification problems in Sections 3.1, 3.2 and 3.3, it is directly applied to produce a softmax for classification, while in the automatic Q&A problem of Section 3.4, it is used to compute the correlation with the candidate answers in order to select the best one. Figure 1 gives an example with K = 5, R = 2 and N = 10 terminating on condition a).

2.2 Training with REINFORCE

Our goal for training is to estimate the parameters of the LSTM and possibly the word embedding, which are denoted as $\theta_m$, together with the jumping action parameters $\theta_a$. Once obtained, they can be used for inference. The estimation of $\theta_m$ is straightforward in the tasks that can be reduced to classification problems (which is essentially what our experiments cover), as the cross entropy objective $J_1(\theta_m)$ is differentiable over $\theta_m$, so we can directly apply backpropagation to minimize it. However, the nature of the discrete jumping decisions made at every step makes it difficult to estimate $\theta_a$, as the cross entropy is no longer differentiable over $\theta_a$. Therefore, we formulate it as a reinforcement learning problem and apply a policy gradient method to train the model.

Specifically, we need to maximize a reward function over $\theta_a$, which can be constructed as follows. Let $j_{1:N}$ be the jumping action sequence during training with an example $x_{1:T}$. Suppose $h_i$ is the hidden state of the LSTM right before the $i$-th jump $j_i$ (note that the $i$-th jumping step is usually not $x_i$); then it is a function of $j_{1:i-1}$ and thus can be denoted as $h_i(j_{1:i-1})$. Now the jump is attained by sampling from the multinomial distribution $p(j_i \mid h_i(j_{1:i-1}); \theta_a)$, which is determined by the jump softmax. We receive a reward $R$ after processing $x_{1:T}$ under the current jumping strategy; in general one may receive (discounted) intermediate rewards after each jump, but we only consider a final reward, which is equivalent to the special case where all intermediate rewards are identical and undiscounted. The reward should be positive if the output is favorable and non-positive otherwise. In our experiments, we choose

$$R = \begin{cases} 1 & \text{if the prediction is correct;} \\ -1 & \text{otherwise.} \end{cases}$$

Then the objective function of $\theta_a$ we want to maximize is the expected reward under the distribution defined by the current jumping policy, i.e.,

$$J_2(\theta_a) = \mathbb{E}_{p(j_{1:N};\, \theta_a)}[R], \tag{1}$$

where $p(j_{1:N}; \theta_a) = \prod_i p(j_{1:i} \mid h_i(j_{1:i-1}); \theta_a)$. Optimizing this objective numerically requires computing its gradient, whose exact value is intractable to obtain, as the expectation is over high-dimensional interaction sequences. By running $S$ examples, an approximate gradient can be computed by the following REINFORCE algorithm (Williams, 1992):

$$\nabla_{\theta_a} J_2(\theta_a) = \sum_{i=1}^{N} \mathbb{E}_{p(j_{1:N};\, \theta_a)}\big[\nabla_{\theta_a} \log p(j_{1:i} \mid h_i; \theta_a)\, R\big] \approx \frac{1}{S} \sum_{s=1}^{S} \sum_{i=1}^{N} \nabla_{\theta_a} \log p(j^s_{1:i} \mid h^s_i; \theta_a)\, R^s,$$

where the superscript $s$ denotes a quantity belonging to the $s$-th example. The term $\nabla_{\theta_a} \log p(j_{1:i} \mid h_i; \theta_a)$ can be computed by standard backpropagation.

Although the above estimate of $\nabla_{\theta_a} J_2(\theta_a)$ is unbiased, it may have very high variance. One widely used remedy to reduce the variance is to subtract a baseline value $b^s_i$ from the reward $R^s$, such that the approximate gradient becomes

$$\nabla_{\theta_a} J_2(\theta_a) \approx \frac{1}{S} \sum_{s=1}^{S} \sum_{i=1}^{N} \nabla_{\theta_a} \log p(j^s_{1:i} \mid h^s_i; \theta_a)\,(R^s - b^s_i).$$

It is shown (Williams, 1992; Zaremba and Sutskever, 2015) that any number $b^s_i$ yields an unbiased estimate. Here, we adopt the strategy of Mnih et al. (2014): $b^s_i = w_b h^s_i + c_b$, where the parameters $\theta_b = \{w_b, c_b\}$ are learned by minimizing $(R^s - b^s_i)^2$.
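Before assembling the final objective below, the gradient estimator just described can be made concrete with a small numpy sketch for a single example. The linear jump softmax and the ±1 reward follow the text above; the hidden size, the per-jump treatment of the log-probabilities and the zero-initialized baseline are illustrative assumptions rather than the actual TensorFlow implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                      # maximum jump size; actions are 0..K (0 terminates reading)
H = 8                      # hidden size (illustrative)
W_a = rng.normal(scale=0.1, size=(H, K + 1))   # jump-softmax parameters theta_a
w_b, c_b = np.zeros(H), 0.0                    # baseline parameters theta_b

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Pretend we ran the model on one example: hidden states right before each jump,
# the jumps that were sampled from the jump softmax, and the final +/-1 reward.
hidden_states = rng.normal(size=(3, H))        # h_1, h_2, h_3
probs = [softmax(h @ W_a) for h in hidden_states]
jumps = [int(rng.choice(K + 1, p=p)) for p in probs]
reward = 1.0                                   # +1 if the prediction is correct, -1 otherwise

# REINFORCE with a linear baseline b_i = w_b . h_i + c_b:
# grad J2 ~ sum_i grad log p(j_i | h_i) * (R - b_i); the baseline itself
# would be fit separately by minimizing (R - b_i)^2.
grad_W_a = np.zeros_like(W_a)
for h, p, j in zip(hidden_states, probs, jumps):
    b = float(h @ w_b + c_b)
    dlogp_dlogits = -p.copy()
    dlogp_dlogits[j] += 1.0                    # d log softmax(z)_j / dz = onehot(j) - p
    grad_W_a += np.outer(h, dlogp_dlogits) * (reward - b)

print(grad_W_a.shape)                          # (8, 6): ascend this to increase expected reward
```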
Now the final objective to minimize is

$$J(\theta_m, \theta_a, \theta_b) = J_1(\theta_m) - J_2(\theta_a) + \sum_{s=1}^{S} \sum_{i=1}^{N} (R^s - b^s_i)^2,$$

which is fully differentiable and can be solved by standard backpropagation.

2.3 Inference

During inference, we can either use sampling or greedy evaluation, i.e., selecting the most probable jumping step suggested by the jump softmax and following that path. In our experiments, we adopt the sampling scheme.

3 Experimental Results

In this section, we present our empirical studies to understand the efficiency of the proposed model in reading text. The tasks under experimentation are: synthetic number prediction, sentiment analysis, news topic classification and automatic question answering. All but the first are representative text-reading tasks involving different dataset sizes and various levels of text processing, from character to word to sentence. Table 1 summarizes the statistics of the datasets in our experiments. To exclude the potential impact of advanced models, we restrict our comparison to the vanilla LSTM (Hochreiter and Schmidhuber, 1997) and our model, which is referred to as LSTM-Jump. In a nutshell, we show that, while achieving the same or even better testing accuracy, our model is up to 6 times and 66 times faster than the baseline LSTM model on real and synthetic datasets, respectively, as we are able to selectively skip a large fraction of the text.

Task | Dataset | Level | Vocab | AvgLen | #train | #valid | #test | #class
Number Prediction | synthetic | word | 100 | 100 words | 1M | 10K | 10K | 100
Sentiment Analysis | Rotten Tomatoes | word | 18,764 | 22 words | 8,835 | 1,079 | 1,030 | 2
Sentiment Analysis | IMDB | word | 112,540 | 241 words | 21,143 | 3,857 | 25,000 | 2
News Classification | AG | character | 70 | 200 characters | 101,851 | 18,149 | 7,600 | 4
Q/A | Children Book Test-NE | sentence | 53,063 | 20 sentences | 108,719 | 2,000 | 2,500 | 10
Q/A | Children Book Test-CN | sentence | 53,185 | 20 sentences | 120,769 | 2,000 | 2,500 | 10
Table 1: Task and dataset statistics.

In fact, the proposed model can be readily extended to other recurrent neural networks with sophisticated mechanisms such as attention and/or hierarchical structure to achieve higher accuracy than those presented below. However, this is orthogonal to the main focus of this work and is left as interesting future work.

General Experiment Settings We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 in all experiments. We also apply gradient clipping to all the trainable variables with a threshold of 1.0. The dropout rate between the LSTM layers is 0.2 and the embedding dropout rate is 0.1. We repeat the notations N, K and R defined previously in Table 2, so readers can easily refer to them when looking at Tables 4, 5, 6 and 7. While K is fixed during both training and testing, we fix R and N during training but vary their values at test time to see the impact of parameter changes. Note that N is essentially a constraint which can be relaxed; we prefer to enforce it here to let the model learn to read fewer tokens. Finally, the reported test time is measured by running one pass over the whole test set, instance by instance, and the speedup is over the base LSTM model. The code is written with TensorFlow (https://www.tensorflow.org/).

Notation | Meaning
N | number of jumps allowed
K | maximum size of jumping
R | number of tokens read before a jump
Table 2: Notations referred to in experiments.
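Using the notation of Table 2, the reading loop at inference time can be sketched as follows. The LSTM update and the jump softmax are stubbed out with placeholder functions, so this only illustrates the control flow (read R tokens, sample a jump of at most K tokens, stop on a sampled 0, after the jump budget N is used up, or at the end of the text); it is not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(state, token_embedding):
    """Placeholder for one LSTM update; returns the new hidden state (illustrative)."""
    return np.tanh(0.5 * state + token_embedding)

def jump_distribution(state, K):
    """Placeholder jump softmax over {0, 1, ..., K}; 0 means stop reading.
    In the real model this would be a linear layer applied to `state`."""
    logits = rng.normal(size=K + 1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def skim_read(tokens, embed, N, K, R, hidden_size=8):
    """Read R tokens, sample a jump of up to K tokens, repeat with at most N jumps."""
    state = np.zeros(hidden_size)
    pos, jumps, tokens_read = 0, 0, 0
    while pos < len(tokens):
        for _ in range(R):                     # read R tokens before deciding to jump
            if pos >= len(tokens):
                break
            state = lstm_step(state, embed(tokens[pos]))
            pos, tokens_read = pos + 1, tokens_read + 1
        if pos >= len(tokens):                 # condition c): processed the last token
            break
        kappa = int(rng.choice(K + 1, p=jump_distribution(state, K)))
        if kappa == 0:                         # condition a): jump softmax samples a 0
            break
        jumps += 1
        if jumps >= N:                         # condition b): jump budget used up
            break
        pos += kappa                           # skip kappa tokens
    return state, tokens_read                  # `state` feeds the task-specific output layer

embed = lambda tok: rng.normal(size=8)          # toy embedding (random vector, ignores the token id)
state, n_read = skim_read(list(range(100)), embed, N=10, K=5, R=2)
print(n_read, "tokens actually read out of 100")
```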
3.1 Number Prediction with a Synthetic Dataset

We first test whether LSTM-Jump is indeed able to learn how to jump if a very clear jumping signal is given in the text. The input of the task is a sequence of T positive integers $x_{0:T-1}$ and the output is simply $x_{x_0}$. That is, the output is chosen from the input sequence, with its index determined by $x_0$. Here are two examples to illustrate the idea:

input1: 4, 5, 1, 7, 6, 2.    output1: 6
input2: 2, 4, 9, 4, 5, 6.    output2: 9

One can see that $x_0$ is essentially the oracle jumping signal, i.e., the indicator of how many steps the reading should jump to get the exact output; the remaining numbers of the sequence are useless. After reading the first token, a "smart" network should be able to learn from the training examples to jump to the output position, skipping the rest.

We generate 1 million training and 10,000 validation examples with the rule above, each with sequence length T = 100. We also impose $1 \le x_0 < T$ to ensure the index is valid. We find that directly training LSTM-Jump on the full-length sequences is unlikely to converge, so we adopt a curriculum training scheme. More specifically, we generate sequences with lengths {10, 20, 30, 40, 50, 60, 70, 80, 90, 100} and train the model starting from the shortest. Whenever the training accuracy reaches a threshold, we shift to longer sequences. We also train an LSTM with the same curriculum training scheme. Training stops when the validation accuracy is larger than 98%. We choose this stopping criterion simply because it is the highest that both models can achieve (in fact, our model can reach higher accuracy, but we stick to 98% for ease of comparison). All the networks are single layered, with hidden size 512, embedding size 32 and batch size 100. During testing, we generate sequences of lengths 10, 100 and 1000 with the same rule, each having 10,000 examples. As the training set is large enough, we do not have to worry about overfitting, so dropout is not applied. In fact, we find that the training, validation and testing accuracies are almost the same.

Seq length | LSTM-Jump | LSTM | Speedup
Test accuracy
10 | 98% | 96% | n/a
100 | 98% | 96% | n/a
1000 | 90% | 80% | n/a
Test time (Avg tokens read)
10 | 13.5s (2.1) | 18.9s (10) | 1.40x
100 | 13.9s (2.2) | 120.4s (100) | 8.66x
1000 | 18.9s (3.0) | 1250s (1000) | 66.14x
Table 3: Testing accuracy and time of the synthetic number prediction problem. The jumping level is number.

The results of LSTM and our method, LSTM-Jump, are shown in Table 3. The first observation is that LSTM-Jump is faster than LSTM; the longer the sequence is, the more significant the speedup LSTM-Jump can gain. This is because the well-trained LSTM-Jump is aware of the jumping signal at the first token and hence can directly jump to the output position to make the prediction, while LSTM is agnostic to the signal and has to read the whole sequence. As a result, the reading speed of LSTM-Jump is hardly affected by the length of the sequence, but that of LSTM is linear with respect to length. Besides, LSTM-Jump also outperforms LSTM in terms of test accuracy in all cases. This is not surprising either, as LSTM has to read a large number of tokens that are potentially not helpful and could interfere with the prediction. In summary, the results indicate LSTM-Jump is able to learn to jump if the signal is clear.
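The synthetic data described above is easy to reproduce; a possible generator is sketched below. The curriculum lengths follow the list given in the text and the vocabulary size of 100 matches Table 1, while the exact range of the filler integers and the toy split sizes are assumptions.

```python
import random

def make_example(T, vocab_size=100, rng=random):
    """One synthetic example: the label is x[x0], so x0 is the oracle jump signal."""
    x = [rng.randrange(1, vocab_size) for _ in range(T)]
    x[0] = rng.randrange(1, T)        # enforce 1 <= x0 < T so the index is valid
    return x, x[x[0]]

def make_split(num_examples, T, seed=0):
    rng = random.Random(seed)
    return [make_example(T, rng=rng) for _ in range(num_examples)]

# Curriculum: train on short sequences first, then progressively longer ones.
curriculum_lengths = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
train_stages = {T: make_split(1000, T, seed=T) for T in curriculum_lengths}  # toy sizes

x, y = make_example(6, rng=random.Random(1))
print(x, "->", y)   # the label is the element that x[0] points to
```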
3.2 Word Level Sentiment Analysis with Rotten Tomatoes and IMDB datasets As LSTM-Jump has shown great speedups in the synthetic dataset, we would like to understand whether it could carry this benefit to real-world data, where “jumping” signal is not explicit. So in this section, we conduct sentiment analysis on two movie review datasets, both containing equal numbers of positive and negative reviews. The first dataset is Rotten Tomatoes, which contains 10,662 documents. Since there is not a standard split, we randomly select around 80% for training, 10% for validation, and 10% for testing. The average and maximum lengths of the reviews are 22 and 56 words respectively, and we pad each of them to 60. We choose the pre-trained word2vec embeddings5 (Mikolov et al., 2013) as 5https://code.google.com/archive/p/ word2vec/ our fixed word embedding that we do not update this matrix during training. Both LSTM-Jump and LSTM contain 2 layers, 256 hidden units and the batch size is 100. As the amount of training data is small, we slightly augment the data by sampling a continuous 50-word sequence in each padded reviews as one training sample. During training, we enforce LSTM-Jump to read 8 tokens before a jump (R = 8), and the maximum skipping tokens per jump is 10 (K = 10), while the number of jumps allowed is 3 (N = 3). The testing result is reported in Table 4. In a nutshell, LSTM-Jump is always faster than LSTM under different combinations of R and N. At the same time, the accuracy is on par with that of LSTM. In particular, the combination of (R, N) = (7, 4) even achieves slightly better accuracy than LSTM while having a 1.5x speedup. Model (R, N) Accuracy Time Speedup LSTM-Jump (9, 2) 0.783 6.3s 1.98x (8, 3) 0.789 7.3s 1.71x (7, 4) 0.793 8.1s 1.54x LSTM n/a 0.791 12.5s 1x Table 4: Testing time and accuracy on the Rotten Tomatoes review classification dataset. The maximum size of jumping K is set to 10 for all the settings. The jumping level is word. The second dataset is IMDB (Maas et al., 2011),6 which contains 25,000 training and 25,000 testing movie reviews, where the average length of text is 240 words, much longer than that of Rotten Tomatoes. We randomly set aside about 15% of training data as validation set. Both LSTM-Jump and LSTM has one layer and 128 hidden units, and the batch size is 50. Again, we use pretrained word2vec embeddings as initialization but they are updated during training. We either pad a short sequence to 400 words or randomly select a 400word segment from a long sequence as a training example. During training, we set R = 20, K = 40 and N = 5. As Table 5 shows, the result exhibits a similar trend as found in Rotten Tomatoes that LSTMJump is uniformly faster than LSTM under many settings. The various (R, N) combinations again demonstrate the trade-off between efficiency and accuracy. If one cares more about accuracy, then allowing LSTM-Jump to read and jump more 6http://ai.Stanford.edu/amaas/data/ sentiment/index.html 1884 Model (R, N) Accuracy Time Speedup LSTM-Jump (80, 8) 0.894 769s 1.62x (80, 3) 0.892 764s 1.63x (70, 3) 0.889 673s 1.85x (50, 2) 0.887 585s 2.12x (100, 1) 0.880 489s 2.54x LSTM n/a 0.891 1243s 1x Table 5: Testing time and accuracy on the IMDB sentiment analysis dataset. The maximum size of jumping K is set to 40 for all the settings. The jumping level is word. times is a good choice. Otherwise, shrinking either one would bring a significant speedup though at the price of losing some accuracy. 
Nevertheless, the configuration with the highest accuracy still enjoys a 1.6x speedup compared to LSTM. With a slight loss of accuracy, LSTM-Jump can be 2.5x faster . 3.3 Character Level News Article Classification with AG dataset We now present results on testing the character level jumping with a news article classification problem. The dataset contains four classes of topics (World, Sports, Business, Sci/Tech) from the AG’s news corpus,7 a collection of more than 1 million news articles. The data we use is the subset constructed by Zhang et al. (2015) for classification with character-level convolutional networks. There are 30,000 training and 1,900 testing examples for each class respectively, where 15% of training data is set aside as validation. The nonspace alphabet under use are: abcdefghijklmnopqrstuvwxyz0123456 789-,;.!?:/\|_@#$%&*˜‘+-=<>()[]{} Since the vocabulary size is small, we choose 16 as the embedding size. The initialized entries of the embedding matrix are drawn from a uniform distribution in [−0.25, 0.25], which are progressively updated during training. Both LSTM-Jump and LSTM have 1 layer and 64 hidden units and the batch sizes are 20 and 100 respectively. The training sequence is again of length 400 that it is either padded from a short sequence or sampled from a long one. During training, we set R = 30, K = 40 and N = 5. The result is summarized in Table 6. It is interesting to see that even with skipping, LSTM-Jump 7http://www.di.unipi.it/˜gulli/AG_ corpus_of_news_articles.html is not always faster than LSTM. This is mainly due to the fact that the embedding size and hidden layer are both much smaller than those used previously, and accordingly the processing of a token is much faster. In that case, other computation overhead such as calculating and sampling from the jump softmax might become a dominating factor of efficiency. By this cross-task comparison, we can see that the larger the hidden unit size of recurrent neural network and the embedding are, the more speedup LSTM-Jump can gain, which is also confirmed by the task below. Model (R, N) Accuracy Time Speedup LSTM-Jump (50, 5) 0.854 102s 0.80x (40, 6) 0.874 98.1s 0.83x (40, 5) 0.889 83.0s 0.98x (30, 5) 0.885 63.6s 1.28x (30, 6) 0.893 74.2s 1.10x LSTM n/a 0.881 81.7s 1x Table 6: Testing time and accuracy on the AG news classification dataset. The maximum size of jumping K is set to 40 for all the settings. The jumping level is character. 3.4 Sentence Level Automatic Question Answering with Children’s Book Test dataset The last task is automatic question answering, in which we aim to test the sentence level skimming of LSTM-Jump. We benchmark on the data set Children’s Book Test (CBT) (Hill et al., 2015).8 In each document, there are 20 contiguous sentences (context) extracted from a children’s book followed by a query sentence. A word of the query is deleted and the task is to select the best fit for this position from 10 candidates. Originally, there are four types of tasks according to the part of speech of the missing word, from which, we choose the most difficult two, i.e., the name entity (NE) and common noun (CN) as our focus, since simple language models can already achieve human-level performance for the other two types . The models, LSTM or LSTM-Jump, firstly read the whole query, then the context sentences and finally output the predicted word. While LSTM reads everything, our jumping model would decide how many context sentences should skip after reading one sentence. 
Whenever a model finishes reading, the context and query are encoded in its 8http://www.thespermwhale.com/ jaseweston/babi/CBTest.tgz 1885 hidden state ho, and the best answer from the candidate words has the same index that maximizes the following: softmax(CWho) ∈R10, where C ∈R10×d is the word embedding matrix of the 10 candidates and W ∈Rd×hidden size is a trainable weight variable. Using such bilinear form to select answer basically follows the idea of Chen et al. (2016), as it is shown to have good performance. The task is now distilled to a classification problem of 10 classes. We either truncate or pad each context sentence, such that they all have length 20. The same preprocessing is applied to the query sentences except that the length is set as 30. For both models, the number of layers is 2, the number of hidden units is 256 and the batch size is 32. Pretrained word2vec embeddings are again used and they are not adjusted during training. The maximum number of context sentences LSTM-Jump can skip per time is K = 5 while the number of total jumping is limited to N = 5. We let the model jump after reading every sentence, so R = 1 (20 words). The result is reported in Table 7. The performance of LSTM-Jump is superior to LSTM in terms of both accuracy and efficiency under all settings in our experiments. In particular, the fastest LSTM-Jump configuration achieves a remarkable 6x speedup over LSTM, while also having respectively 1.4% and 4.4% higher accuracy in Children’s Book Test - Named Entity and Children’s Book Test - Common Noun. Model (R, N) Accuracy Time Speedup Children’s Book Test - Named Entity LSTM-Jump (1, 5) 0.468 40.9s 3.04x (1, 3) 0.464 30.3s 4.11x (1, 1) 0.452 19.9s 6.26x LSTM n/a 0.438 124.5s 1x Children’s Book Test - Common Noun LSTM-Jump (1, 5) 0.493 39.3s 3.09x (1, 3) 0.487 29.7s 4.09x (1, 1) 0.497 19.8s 6.14x LSTM n/a 0.453 121.5s 1x Table 7: Testing time and accuracy on the Children’s Book Test dataset. The maximum size of jumping K is set to 5 for all the settings. The jumping level is sentence. The dominant performance of LSTM-Jump over LSTM might be interpreted as follows. After reading the query, both LSTM and LSTM-Jump know what the question is. However, LSTM still has to process the remaining 20 sentences and thus at the very end of the last sentence, the long dependency between the question and output might become weak that the prediction is hampered. On the contrary, the question can guide LSTM-Jump on how to read selectively and stop early when the answer is clear. Therefore, when it comes to the output stage, the “memory” is both fresh and uncluttered that a more accurate answer is likely to be picked. In the following, we show two examples of how the model reads the context given a query (bold face sentences are those read by our model in the increasing order). XXXXX is the missing word we want to fill. Note that due to truncation, a few sentences might look uncompleted. Example 1 In the first example, the exact answer appears in the context multiple times, which makes the task relatively easy, as long as the reader has captured their occurrences. (a) Query: ‘XXXXX! (b) Context: 1. said Big Klaus, and he ran off at once to Little Klaus. 2. ‘Where did you get so much money from?’ 3. ‘Oh, that was from my horse-skin. 4. I sold it yesterday evening.’ 5. ‘That ’s certainly a good price!’ 6. said Big Klaus; and running home in great haste, he took an axe, knocked all his four 7. ‘Skins! 8. skins! 9. Who will buy skins?’ 10. he cried through the streets. 11. 
All the shoemakers and tanners came running to ask him what he wanted for them.’ 12. A bushel of money for each,’ said Big Klaus. 13. ‘Are you mad?’ 14. they all exclaimed. 15. ‘Do you think we have money by the bushel?’ 16. ‘Skins! 17. skins! 18. Who will buy skins?’ 19. he cried again, and to all who asked him what they cost, he answered,’ A bushel 20. ‘He is making game of us,’ they said; and the shoemakers seized their yard measures and (c) Candidates: Klaus | Skins | game | haste | head | home | horses | money | price| streets 1886 (d) Answer: Skins The reading behavior might be interpreted as follows. The model tries to search for clues, and after reading sentence 8, it realizes that the most plausible answer is “Klaus” or “Skins”, as they both appear twice. “Skins” is more likely to be the answer as it is followed by a “!”. The model searches further to see if ”Klaus!” is mentioned somewhere, but it only finds “Klaus” without “!” for the third time. After the last attempt at sentence 14, it is confident about the answer and stops to output with “Skins”. Example 2 In this example, the answer is illustrated by a word “nuisance” that does not show up in the context at all. Hence, to answer the query, the model has to understand the meaning of both the query and context and locate the synonym of “nuisance”, which is not merely verbatim and thus much harder than the previous example. Nevertheless, our model is still able to make a right choice while reading much fewer sentences. (a) Query: Yes, I call XXXXX a nuisance. (b) Context: 1. But to you and me it would have looked just as it did to Cousin Myra – a very discontented 2. “I’m awfully glad to see you, Cousin Myra, ”explained Frank carefully, “and your 3. But Christmas is just a bore – a regular bore.” 4. That was what Uncle Edgar called things that didn’t interest him, so that Frank felt pretty sure of 5. Nevertheless, he wondered uncomfortably what made Cousin Myra smile so queerly. 6. “Why, how dreadful!” 7. she said brightly. 8. “I thought all boys and girls looked upon Christmas as the very best time in the year.” 9. “We don’t, ”said Frank gloomily. 10. “It’s just the same old thing year in and year out. 11. We know just exactly what is going to happen. 12. We even know pretty well what presents we are going to get. 13. And Christmas Day itself is always the same. 14. We’ll get up in the morning , and our stockings will be full of things, and half of 15. Then there ’s dinner. 16. It ’s always so poky. 17. And all the uncles and aunts come to dinner – just the same old crowd, every year, and 18. Aunt Desda always says, ‘Why, Frankie, how you have grown!’ 19. She knows I hate to be called Frankie. 20. And after dinner they’ll sit round and talk the rest of the day, and that’s all. (c) Candidates: Christmas | boys | day | dinner | half | interest | rest | stockings | things | uncles (d) Answer: Christmas The reading behavior can be interpreted as follows. After reading the query, our model realizes that the answer should be something like a nuisance. Then it starts to process the text. Once it hits sentence 3, it may begin to consider “Christmas” as the answer, since “bore” is a synonym of “nuisance”. Yet the model is not 100% sure, so it continues to read, very conservatively – it does not jump for the next three sentences. After that, the model gains more confidence on the answer “Christmas” and it makes a large jump to see if there is something that can turn over the current hypothesis. 
It turns out that the last-read sentence is still talking about Christmas with a negative voice. Therefore, the model stops to take “Christmas” as the output. 4 Related Work Closely related to our work is the idea of learning visual attention with neural networks (Mnih et al., 2014; Ba et al., 2014; Sermanet et al., 2014), where a recurrent model is used to combine visual evidence at multiple fixations processed by a convolutional neural network. Similar to our approach, the model is trained end-to-end using the REINFORCE algorithm (Williams, 1992). However, a major difference between those work and ours is that we have to sample from discrete jumping distribution, while they can sample from continuous distribution such as Gaussian. The difference is mainly due to the inborn characteristics of text and image. In fact, as pointed out by Mnih et al. (2014), it was difficult to learn policies over more than 25 possible discrete locations. This idea has recently been explored in the context of natural language processing applications, where the main goal is to filter irrelevant content using a small network (Choi et al., 2016). Perhaps the most closely related to our work is the concurrent work on learning to reason with reinforcement 1887 learning (Shen et al., 2016). The key difference between our work and Shen et al. (2016) is that they focus on early stopping after multiple pass of data to ensure accuracy whereas our method focuses on selective reading with single pass to enable fast processing. The concept of “hard” attention has also been used successfully in the context of making neural network predictions more interpretable (Lei et al., 2016). The key difference between our work and Lei et al. (2016)’s method is that our method optimizes for faster inference, and is more dynamic in its jumping. Likewise is the difference between our approach and the “soft” attention approach by (Bahdanau et al., 2014). Our method belongs to adaptive computation of neural networks, whose idea is recently explored by (Graves, 2016; Jernite et al., 2016), where different amount of computations are allocated dynamically per time step. The main difference between our method and Graves; Jernite et al.’s methods is that our method can set the amount of computation to be exactly zero for many steps, thereby achieving faster scanning over texts. Even though our method requires policy gradient methods to train, which is a disadvantage compared to (Graves, 2016; Jernite et al., 2016), we do not find training with policy gradient methods problematic in our experiments. At the high-level, our model can be viewed as a simplified trainable Turing machine, where the controller can move on the input tape. It is therefore related to the prior work on Neural Turing Machines (Graves et al., 2014) and especially its RL version (Zaremba and Sutskever, 2015). Compared to (Zaremba and Sutskever, 2015), the output tape in our method is more simple and reward signals in our problems are less sparse, which explains why our model is easy to train. It is worth noting that Zaremba and Sutskever report difficulty in using policy gradients to train their model. Our method, by skipping irrelevant content, shortens the length of recurrent networks, thereby addressing the vanishing or exploding gradients in them (Hochreiter et al., 2001). The baseline method itself, Long Short Term Memory (Hochreiter and Schmidhuber, 1997), belongs to the same category of methods. 
In this category, there are several recent methods that try to achieve the same goal, such as having recurrent networks that operate in different frequency (Koutnik et al., 2014) or is organized in a hierarchical fashion (Chan et al., 2015; Chung et al., 2016). Lastly, we should point out that we are among the recent efforts that deploy reinforcement learning to the field of natural language processing, some of which have achieved encouraging results in the realm of such as neural symbolic machine (Liang et al., 2017), machine reasoning (Shen et al., 2016) and sequence generation (Ranzato et al., 2015). 5 Conclusions In this paper, we focus on learning how to skim text for fast reading. In particular, we propose a “jumping” model that after reading every few tokens, it decides how many tokens should be skipped by sampling from a softmax. Such jumping behavior is modeled as a discrete decision making process, which can be trained by reinforcement learning algorithm such as REINFORCE. In four different tasks with six datasets (one synthetic and five real), we test the efficiency of the proposed method on various levels of text jumping, from character to word and then to sentence. The results indicate our model is several times faster than, while the accuracy is on par with the baseline LSTM model. Acknowledgments The authors would like to thank the Google Brain Team, especially Zhifeng Chen and Yuan Yu for helpful discussion about the implementation of this model on Tensorflow. The first author also wants to thank Chen Liang, Hanxiao Liu, Yingtao Tian, Fish Tung, Chiyuan Zhang and Yu Zhang for their help during the project. Finally, the authors appreciate the invaluable feedback from anonymous reviewers. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042 . Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2014. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly 1888 learning to align and translate. arXiv preprint arXiv:1409.0473 . William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint arXiv:1508.01211 . Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Eunsol Choi, Daniel Hewlett, Alexandre Lacoste, Illia Polosukhin, Jakob Uszkoreit, and Jonathan Berant. 2016. Hierarchical question answering for long documents. arXiv preprint arXiv:1611.01839 . Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704 . Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Andrew M. Dai and Quoc V. Le. 2015. Semisupervised sequence learning. In Advances in Neural Information Processing Systems. pages 3079– 3087. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983 . Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. 
Neural turing machines. arXiv preprint arXiv:1410.5401 . Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv:1511.02301 . Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and J¨urgen Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks, IEEE press. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Variable computation in recurrent neural networks. arXiv preprint arXiv:1611.06188 . Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. 2014. A clockwork rnn. In International Conference on Machine Learning. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning (ICML). Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. arXiv preprint arXiv:1606.04155 . Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017: Long Papers. Andrew L Maas, Raymond E Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 142–150. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. 2014. Recurrent models of visual attention. In Advances in neural information processing systems. pages 2204–2212. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Conference on Computational Natural Language Learning (CoNLL). 1889 Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 115–124. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. 
CoRR abs/1511.06732. http://arxiv.org/abs/1511.06732. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Empirical Methods in Natural Language Processing (EMNLP). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. In Annual Meeting of the Association for Computational Linguistics (ACL). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Pierre Sermanet, Andrea Frome, and Esteban Real. 2014. Attention for fine-grained categorization. arXiv preprint arXiv:1412.7054 . Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Annual Meeting of the Association for Computational Linguistics (ACL). Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284 . Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the conference on empirical methods in natural language processing. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, Christopher Potts, et al. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714 . Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. 2016. A parallel-hierarchical model for machine comprehension on sparse data. arXiv preprint arXiv:1603.08884 . Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 . Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 . Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 . Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning 8:229–256. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural turing machines-revised. arXiv preprint arXiv:1505.00521 . Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. 
Character-level convolutional networks for text classification. In Advances in neural information processing systems. pages 649–657.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1891–1900 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1173 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1891–1900 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1173 An Algebra for Feature Extraction Vivek Srikumar School of Computing University of Utah [email protected] Abstract Though feature extraction is a necessary first step in statistical NLP, it is often seen as a mere preprocessing step. Yet, it can dominate computation time, both during training, and especially at deployment. In this paper, we formalize feature extraction from an algebraic perspective. Our formalization allows us to define a message passing algorithm that can restructure feature templates to be more computationally efficient. We show via experiments on text chunking and relation extraction that this restructuring does indeed speed up feature extraction in practice by reducing redundant computation. 1 Introduction Often, the first step in building statistical NLP models involves feature extraction. It is well understood that the right choice of features can substantially improve classifier performance. However, from the computational point of view, the process of feature extraction is typically treated, at best as the preprocessing step of caching featurized inputs over entire datasets, and at worst, as ‘somebody else’s problem’. While such approaches work for training, when trained models are deployed, the computational cost of feature extraction cannot be ignored. In this paper, we present the first (to our knowledge) algebraic characterization of the process of feature extraction. We formalize feature extractors as arbitrary functions that map objects (words, sentences, etc) to a vector space and show that this set forms a commutative semiring with respect to feature addition and feature conjunction. An immediate consequence of the semiring characterization is a computational one. Every semiring admits the Generalized Distributive Law (GDL) Algorithm (Aji and McEliece, 2000) that exploits the distributive property to provide computational speedups. Perhaps the most common manifestation of this algorithm in NLP is in the form of inference algorithms for factor graphs and Bayesian networks like the max-product, maxsum and sum-product algorithms (e.g. Goodman, 1999; Kschischang et al., 2001). When applied to feature extractors, the GDL algorithm can refactor a feature extractor into a faster one by reducing redundant computation. In this paper, we propose a junction tree construction to allow such refactoring. Since the refactoring is done at the feature template level, the actual computational savings grow as classifiers encounter more examples. We demonstrate the practical utility of our approach by factorizing existing feature sets for text chunking and relation extraction. We show that, by reducing the number of operations performed, we can obtain significant savings in the time taken to extract features. To summarize, the main contribution of this paper is the recognition that feature extractors form a commutative semiring over addition and conjunction. 
We demonstrate a practical consequence of this characterization in the form of a mechanism for automatically refactoring any feature extractor into a faster one. Finally, we show the empirical usefulness of our approach on relation extraction and text chunking tasks. 2 Problem Definition Before formal definitions, let us first see a running example. 2.1 Motivating Example Consider the frequently used unigram, bigram and trigram features. Each of these is a template that specifies a feature representation for a word. In 1891 fact, the bigram and trigram templates themselves are compositional by definition. A bigram is simply the conjunction of a word w and previous word, which we will denote as w-1; i.e., bigram = w-1&w. Similarity, a trigram is the conjunction of w-2 and bigram. These templates are a function that operate on inputs. Given a sentence, say John ate alone, and a target word, say alone, they will produce indicators for the strings w=alone, w-1=ate&w=alone and w-2=John&w-1=ate&w=alone respectively. Equivalently, each template maps an input to a vector. Here, the three vectors will be basis vectors associated with the feature strings. Observe that the function that extracts the target word (i.e., w) has to be executed in all three feature templates. Similarly, w-1 has to be extracted to compute both the bigrams and the trigrams. Can we optimize feature computation by automatically detecting such repetitions? 2.2 Definitions and Preliminaries Let X be a set of inputs to a classification problem at hand; e.g., X could be words, sentences, etc. Let V be a possibly infinite dimensional vector space that represents the feature space. Feature extractors are functions that map the input space X to the feature space V to produce feature vectors for inputs. Let F represent the set of feature functions, defined as the set {f : X →V}. We will use the typewriter font to denote feature functions like w and bigram. To round up the definitions, we will name two special feature extractors in F. The feature extractor 0 maps all inputs to the zero vector. The feature extractor 1 maps all inputs to a bias feature vector. Without loss of generality, we will designate the basis vector i0 ∈V as the bias feature vector. In this paper, we are concerned about two generally well understood operators on feature functions – addition and conjunction. However, let us see formal definitions for completeness. Feature Addition. Given two feature extractors f1, f2 ∈F, feature addition (denoted by +) produces a feature extractor f1 + f2 that adds up the images of f1 and f2. That is, for any example x ∈X, we have (f1 + f2) (x) = f1 (x) + f2 (x) (1) For example, the feature extractor w + w-1 will map the word alone to a vector that is one for the basis elements w=alone and w-1=went. This vector is the sum of the indicator vectors produced by the two operands w and w-1. Feature Conjunction. Given two feature extractors f1, f2 ∈F, their conjunction (denoted by &) can be interpreted as an extension of Boolean conjunction. Indicator features like bigram are predicates for certain observations. Conjoining indicator features for two predicates is equivalent to an indicator feature for the Boolean conjunction of the predicates. More generally, with feature extractors that produce real valued vectors, the conjunction will produce their tensor product. 
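To make these two operators concrete, the toy sketch below (not the paper's code) represents a feature extractor as a function from inputs to a sparse dictionary over basis keys, where a basis key is a frozenset of atomic feature strings and the empty set plays the role of the bias basis i0; the class name FeatureExtractor and the indicator helper are invented purely for illustration.

```python
# Toy sketch (not the paper's implementation): a feature extractor maps an input
# to a sparse vector, encoded as {basis_key: value}, where basis_key is a
# frozenset of atomic feature strings and frozenset() stands for the bias basis i0.
class FeatureExtractor:
    def __init__(self, fn):
        self.fn = fn                              # fn: x -> {frozenset: float}

    def __call__(self, x):
        return self.fn(x)

    def __add__(self, other):                     # feature addition, Eq. (1)
        def added(x):
            out = dict(self(x))
            for k, v in other(x).items():
                out[k] = out.get(k, 0.0) + v
            return out
        return FeatureExtractor(added)

    def __and__(self, other):                     # feature conjunction
        def conjoined(x):
            out = {}
            for k1, v1 in self(x).items():
                for k2, v2 in other(x).items():
                    k = k1 | k2                   # unordered combination of bases
                    out[k] = out.get(k, 0.0) + v1 * v2
            return out
        return FeatureExtractor(conjoined)

def indicator(name, getter):
    """Indicator feature extractor, e.g. indicator('w', lambda s: s['w'])."""
    return FeatureExtractor(lambda x: {frozenset([name + '=' + getter(x)]): 1.0})

ONE = FeatureExtractor(lambda x: {frozenset(): 1.0})    # the bias extractor 1
ZERO = FeatureExtractor(lambda x: {})                   # the zero extractor 0

w, w_1 = indicator('w', lambda s: s['w']), indicator('w-1', lambda s: s['w-1'])
x = {'w': 'alone', 'w-1': 'ate'}
print((w + w_1)(x))    # two unigram bases, {'w=alone'} and {'w-1=ate'}, each 1.0
print((w_1 & w)(x))    # one bigram basis {'w-1=ate', 'w=alone'} with value 1.0
```

Because a combined basis key is a set, the conjunction in this sketch is insensitive to the order of its operands, which anticipates the symmetric tensor product discussed next.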
The equivalence of feature conjunctions to tensor products has been explored and exploited in recent literature for various NLP tasks (Lei et al., 2014; Srikumar and Manning, 2014; Gormley et al., 2015; Lei et al., 2015). We can further generalize this with an additional observation that is crucial for the rest of this paper. We argue that the conjunction operator produces symmetric tensor products rather than general tensor products. To see why, consider the bigram example. Though we defined the bigram feature as the conjunction of w-1 and w, their ordering is irrelevant from classification perspective – the eventual goal is to associate weights with this combination of features. This observation allows us to formally define the conjunction operator as: (f1&f2) (x) = vec (f1 (x) ⊙f2 (x)) (2) Here, vec (·) stands for vectorize, which simply converts the resulting tensor into a vector and ⊙ denotes the symmetric tensor product, introduced by Ryan (1980, Proposition 1.1). A symmetric tensor product is defined to be the average of the tensor products of all possible permutations of the operands, and thus, unlike a simple tensor product, is invariant to permutation of is operands. Informally, if we think of a tensor as a mapping from an ordered sequence of keys to real numbers, then, symmetric tensor product can be thought of as a mapping from a set of keys to numbers. 3 An Algebra for Feature Extraction In this section, we will see that the set of feature extractors F form a commutative semiring with respect to addition and conjunction. First, let us revisit the definition of a commutative semiring. Definition 1. A commutative semiring is an algebraic structure consisting of a set K and two bi1892 nary operations ⊕and ⊗(addition and multiplication respectively) such that: S1. (K, ⊕) is a commutative monoid: ⊕is associative and commutative, and the set K contains a unique additive identity 0 such that ∀x ∈K, we have 0 ⊕x = x ⊕0 = x. S2. (K, ⊗) is a commutative monoid: ⊗is associative and commutative, and the set K contains a unique multiplicative identity 1 such that ∀x ∈K, we have 1 ⊗x = x ⊗1 = x. S3. Multiplication distributes over addition on both sides. That is, for any x, y, z ∈K, we have x ⊗(y ⊕z) = (x ⊗y) ⊕(x ⊗z) and (x ⊕y) ⊗z = (x ⊗z) ⊕(y ⊗z). S4. The additive identity is an annihilating element with respect to multiplication. That is, for any x ∈K, we have x ⊗0 = 0 = 0 ⊗x. We refer the reader to Golan (2013) for a broadranging survey of semiring theory. We can now state and prove the main result of this paper. Theorem 1. Let X be any set and let F denote the set of feature extractors defined on the set. Then, (F, +, &) is a commutative semiring. Proof. We will show that the properties of a commutative semiring hold for (F, +, &) using the definitions of the operators from §2.2. Let f1, f2 and f ∈F be feature extractors. S1. For any example x ∈ X, we have (f1 + f2) (x) = f1 (x) + f2 (y). The right hand side denotes vector addition, which is associative and commutative. The 0 feature extractor is the additive identify because it produces the zero vector for any input. Thus, (F, +) is a commutative monoid. S2. To show that the conjunction operator is associative over feature extractors, it suffices to observe that the tensor product (and hence the symmetric tensor product) is associative. Furthermore, the symmetric tensor product is commutative by definition, because it is invariant to permutation of its operands. 
Finally, the bias feature extractor, 1, that maps all inputs to the bias vector i0, is the multiplicative identity. To see this, consider the conjunction f&1, applied to an input x: (f&1) (x) = vec (f (x) ⊙1 (x)) = vec (f (x) ⊙i0) The product term within the vec (·) in the final expression is a symmetric tensor, defined by basis vectors that are sets of the form {i0, i0}, {i1, i0}, · · · . Each basis {ij, i0} is associated with a feature value f (x)j. Thus, the vectorized form of this tensor will contain the same elements as f (x), perhaps mapped to different bases. The mapping from f (x) to the final vector is independent of the input x because the bias feature extractor is independent of x. Without loss of generality, we can fix this mapping to be the identity mapping, thereby rendering the final vectorized form equal to f (x). That is, f&1 = f. Thus, (F, &) is a commutative monoid. S3. Since tensor products distribute over addition, we get the distributive property. S4. By definition, conjoining with the 0 feature extractor annihilates all feature functions because 0 maps all inputs to the zero vector. ■ 4 From Algebra to an Algorithm The fact that feature extractors form a commutative semiring has a computational consequence. The generalized distributive law (GDL) algorithm (Aji and McEliece, 2000) exploits the properties of a commutative semiring to potentially reduce the computational effort for marginalizing sums of products. The GDL algorithm manifests itself as the Viterbi, Baum-Welch, Floyd-Warshall and belief propagation algorithms, and the Fast Fourier and Hadamard transforms. Each corresponds to a different commutative semiring and a specific associated marginalization problem. Here, we briefly describe the general marginalization problem from Aji and McEliece (2000) to introduce notation and also highlight the analogies to inference in factor graphs. Let x1, x2, · · · , xn denote a collection of variables that can take values from finite sets A1, A2, · · · , An respectively. Let boldface x denote the entire set of variables. These variables are akin to inference variables in factor graphs that may be assigned values or marginalized away. Let (K, ⊕, ⊗) denote a commutative semiring. Suppose αi is a function that maps a subset of the variables {xi1, xi2, · · · } to the set K. The subset of variables that constitute the domain of αi is called the local domain of the corresponding local function. Local domains and local functions are analogous to factors and factor potentials in a factor graph. With a collection of local domains, each associated with a function αi, the “marginalize the 1893 product” problem is that of computing: X x Y i αi (xi1, xi2, · · · ) (3) Here, the sum and product use the semiring operators. The summation is over all possible valid assignments of the variables x over the cross product of the sets A1, A2, · · · , An. This problem generalizes the familiar max-product or sum-product settings. Indeed, the GDL algorithm is a generalization of the message passing (Pearl, 2014) for efficiently computing marginals. To make feature extraction efficient using the GDL algorithm, in the next section, we will define a marginalization problem in terms of the semiring operators by specifying the variables involved, the local domains and local functions. Instead of describing the algorithm in the general setting, we will instantiate it on the semiring at hand. 
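Before setting up that marginalization problem, the semiring laws that the GDL algorithm relies on can be sanity-checked on the toy sketch from Section 2.2 (reusing its FeatureExtractor, indicator, and ONE definitions); the factorized expression checked at the end is the one derived by hand in Section 5, and the whole snippet is illustrative only.

```python
# Reuses FeatureExtractor, indicator, and ONE from the sketch in Section 2.2.
w   = indicator('w',   lambda s: s['w'])
w_1 = indicator('w-1', lambda s: s['w-1'])
w_2 = indicator('w-2', lambda s: s['w-2'])
x = {'w': 'alone', 'w-1': 'ate', 'w-2': 'John'}

assert (w & ONE)(x) == w(x)                                    # 1 is the unit for &
assert (w_1 & w)(x) == (w & w_1)(x)                            # & is commutative
assert (w_1 & (w + w_2))(x) == ((w_1 & w) + (w_1 & w_2))(x)    # & distributes over +

# The distributive law is what licenses refactoring (cf. Eqs. (4) and (5) below):
original   = w + (w_1 & w) + (w_2 & w_1 & w)     # 2 additions, 3 conjunctions
factorized = (ONE + ((ONE + w_2) & w_1)) & w     # 2 additions, 2 conjunctions
assert original(x) == factorized(x)              # identical feature vectors
```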
5 Marginalizing Feature Extractors First, let us see why we can expect any benefit from the GDL algorithm by revisiting our running example (unigrams, bigrams and trigrams), written below using the semiring operations: f = w + (w-1&w) + (w-2&w-1&w) (4) When applied to a token, f performs two additions and three conjunctions. However, by applying the distributive property, we can refactor it as follows to reduce the number of operations: f′ = (1 + (1 + w-2) &w-1) &w (5) The refactored version f′ – equivalent to the original one – only performs two additions and two conjunctions, offering a computational saving of one operation. This refactoring is done at the level of feature templates (i.e., feature extractors); the actual savings are realized when the feature vectors are computed by applying this feature function to an input. Thus, the simplification, though seemingly modest at the template level, can lead to a substantial speed improvements when the features vectors are actually manifested from data. The GDL algorithm instantiated with the feature extractor semiring, automates such factorization at a symbolic level. In the rest of this section, first (§5.1), we will write our problem as a marginalization problem, as in Equation (3). Then (§5.2), we will construct a junction tree to apply the message passing algorithm. 5.1 Canonicalizing Feature Extractors To frame feature simplification as marginalization, we need to first write any feature extractor as a canonical sum of products that is amenable for factorization (i.e., as in (3)). To do so, in this section, we will define: (a) the variables involved, (b) the local domains (i.e., subsets of variables contributing to each product term), and, (c) a local function for each local domain (i.e., the αi’s). Variables. First, we write a feature extractor as a sum of products. Our running example (4) is already one. If we had an expression like f1& (f2 + f3), we can expand it into f1&f2 + f1&f3. From the sum of products, we identify the base feature extractors (i.e., ones not composed of other feature extractors) and define a variable xi for each. In our example, we have w, w-1 and w-2. Next, recall from §4 that each variable xi can take values from a finite set Ai. If a base feature extractor fi corresponds to the variable xi, then, we define xi’s domain to be the set Ai = {1, fi}. That is, each variable can either be the bias feature extractor or the feature extractor associated with it. Our example gives three variables x1, x2, x3 with domains A1 = {1, w}, A2 = {1, w-1}, A3 = {1, w-2} respectively. Local domains. Local domains are subsets of the variables defined above. They are the domains of functions that constitute products in the canonical form of a feature extractor. We define the following local domains, each illustrated with the corresponding instantiation in our running example: 1. A singleton set for each variable: {x1}, {x2}, and {x3}. 2. One local domain consisting of all the variables: The set {x1, x2, x3}. 3. One local domain consisting of no variables: The empty set {}. 4. One local domain for each subset of base feature extractors that participate in at least two conjunctions in the sum-of-products (i. e., the ones that can be factored away): Only {x1, x2} in our example, because only w and w-1 participate in two conjunctions in (4). Local functions. Each local domain is associated with a function that maps variable assignments to feature extractors. 
These functions (called local kernels by Aji and McEliece (2000)) are like potential functions in a factor graph. We define two kinds of local functions, driven by the goal of de1894 signing a marginalization problem that pushes towards simpler feature functions. 1. We associate the identity function with all singleton local domains, and the constant function that returns the bias 1 with the empty domain {}. 2. With all other local domains, we associate an indicator function, denoted by z. For a local domain, z is an indicator for those assignments of the variables involved, whose conjunctions are present in any product term in sum-of-products. In our running example, the function z(x1, x2) is the indicator for (x1, x2) belonging to the set {(w, 1) , (w, w-1)}, represented by the table: x1 x2 z(x1, x2) 1 1 0 1 w-1 0 w 1 1 w w-1 1 The indicator returns the semiring’s multiplicative and additive identities. The value of z above for inputs (w, 1) is 1 because the first term in (4) that defines the feature extractor contains w, but not w-1. On the other hand, the input (1, 1) is mapped to 0 because every product term contains either w or w-1. For the local domain {x1, x2, x3}, the local function is the indicator for the set {(w, 1, 1), (w, w-1, 1), (w, w-1, w-2)}, corresponding to each product term. In summary, for the running example we have: Local domain Local function {x1} x1 {x2} x2 {x3} x3 {x1, x2, x3} z(x1, x2, x3) {} 1 {x1, x2} z(x1, x2) The procedure described here aims to convert any feature function into a canonical form that can be factorized using the GDL algorithm. Indeed, using local domains and functions specified above, any feature extractor can we written as a canonical sum of products as in (3). For example, using the table above, our running example is identical to X x1,x2,x3 z(x1, x2, x3)&z(x1, x2)&x1&x2&x3 (6) Here, the summation is over the cross product of the Ai’s. The choice of the z functions ensures that only those conjunctions that were in the original feature extractor remain. This section shows one approach for canonicalization; the local domains and functions are a design choice that may be optimized in future work. We should also point out that, while this process is notationally tedious, its actual computational cost is negligible, especially given that it is to be performed only once at the template level. 5.2 Simplifying feature extractors As mentioned in §4, a commutative semiring can allow us to employ the GDL algorithm to efficiently compute a sum of products. Starting from a canonical sum-of-products expression such as the one in (6), this process is similar to variable elimination for Bayesian networks. The junction tree algorithm is a general scheme to avoid redundant computation in such networks (Cowell, 2006). To formalize this, we will first build a junction tree and then define the messages sent from the leaves to the root. The final message at the root will give us the simplified feature function. Constructing a Junction Tree. First, we will construct a junction tree using the local domains from § 5.1. In any junction tree, the edges should satisfy the running intersection property: i.e., if a variable xi is in two nodes in the tree, then it should be in every node in the path connecting them. To build a junction tree, we will first create a graph whose nodes are the local domains. The edges of this graph connect pairs of nodes if the variables in one are a subset of the other. 
For simplicity, we will assume that our nodes are arranged in a lattice as shown in Figure 1, with edges connecting nodes in subsequent levels. For example, there is no edge connecting nodes B and C. Every spanning tree of this lattice is a junction tree. Which one should we consider? Let us examine the properties that we need. First, the root of the tree should correspond to the empty local domain {} because messages arriving at this node will accumulate all products. Second, as we will see, feature extractors farther from the root will appear in inner terms in the factorized form. That is, frequent or more expensive feature extractors should be incentivized to appear higher in the tree. To capture these preferences, we frame the task of constructing the junction tree as a maximum spanning tree problem over the graph, with edge weights incorporating the preferences. One natural weighting function is the computational expense of the base feature extractors associated with that edge. For example, the weight associated with the edge connecting nodes E and D in the fig1895 {} {x1} {x2} {x3} {x1, x2} {x1, x2, x3} A B C D E F Figure 1: The junction tree for our running example. The process of constructing the junction tree is described in the text. Here, we show both the tree and the graph from which it is constructed; dashed lines show edges are not in the tree. Filled circles denote the names of the nodes. The local domain {x1} is connected to the empty local domain because the feature w corresponding to it is most frequent. ure can be the average cost of the w and w-1 feature extractors. If computational costs are unavailable, we can use the number times a feature extractor appears in the expression to be simplified. Under this criterion, in our example, edges connecting E to its neighbors will be weighted highest. Once we have a spanning tree, we make the edges directed so that the empty set is the root. Figure 1 shows the junction tree obtained for our running example. Message Passing for Feature Simplification. Given the junction tree, we can use a standard message passing scheme for factorization. The goal is to collect information at each node in the tree from its children all the way to the root. Suppose vi, vj denote two nodes in the tree. Since nodes are associated with sets of variables, their intersections vi ∩vj and differences vi \ vj are defined. For example, in the example, A ∩B = {x3} and B \ D = {x3}. We will denote children of a node vi in the junction tree by C(vi). The message from any node vi to its parent vj is a function that maps the variables vi ∩vj to a feature extractor by marginalizing out all variables that are in vi but not in vj. Formally, we define the message µij from a node vi to a node vj as: µij (vi ∩vj) = X vj\vi αi (vi) Y vk∈C(vi) µki (vk ∩vi) . (7) Here, αi is the local function at node vi. To complete the formal definition of the algorithm, we note that by performing post-order traversal of the junction tree, we will accumulate all messages at the root of the tree, that corresponds to the empty set of variables. The incoming message at this node represents the factorized feature extractor. Algorithm 1 briefly summarizes the entire simplification process. The proof of correctness of the algorithm follows from the fact that the range of all the local functions is a commutative semiring, namely the feature extractor semiring. We refer the reader to (Aji and McEliece, 2000, Appendix A) for details. 
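To make Eq. (7) and the traversal concrete, the sketch below runs message passing over the junction tree of the running example, again reusing the toy semiring (FeatureExtractor, ONE, ZERO, and the indicators w, w_1, w_2) from the earlier sketches; the message helper, node layout, and identity-based indicator tests are illustrative assumptions, not the paper's implementation.

```python
from itertools import product as assignments

def message(node, parent, domains, local_fn, children):
    """Message from node to parent (Eq. 7): marginalize out the variables of node not in parent."""
    summed = sorted(node - parent)                     # variables to marginalize out
    child_msgs = [message(c, node, domains, local_fn, children) for c in children[node]]
    def msg(kept_assign):                              # assignment over node ∩ parent
        total = ZERO
        for values in assignments(*[domains[v] for v in summed]):
            a = dict(kept_assign, **dict(zip(summed, values)))
            term = local_fn[node](a)
            for child, m in zip(children[node], child_msgs):
                term = term & m({v: a[v] for v in child & node})
            total = total + term
        return total
    return msg

# Junction tree of Figure 1: messages flow A->B->D, C->D, D->E, E->F (the root).
x1, x2, x3 = 'x1', 'x2', 'x3'
domains = {x1: [ONE, w], x2: [ONE, w_1], x3: [ONE, w_2]}
A, B, C = frozenset([x3]), frozenset([x1, x2, x3]), frozenset([x2])
D, E, F = frozenset([x1, x2]), frozenset([x1]), frozenset()
children = {A: [], B: [A], C: [], D: [B, C], E: [D], F: [E]}

terms3 = {(True, False, False), (True, True, False), (True, True, True)}   # z(x1, x2, x3)
terms2 = {(True, False), (True, True)}                                     # z(x1, x2)
local_fn = {
    A: lambda a: a[x3], C: lambda a: a[x2], E: lambda a: a[x1],   # identity functions
    B: lambda a: ONE if (a[x1] is w, a[x2] is w_1, a[x3] is w_2) in terms3 else ZERO,
    D: lambda a: ONE if (a[x1] is w, a[x2] is w_1) in terms2 else ZERO,
    F: lambda a: ONE,
}

factorized = message(E, F, domains, local_fn, children)({})   # incoming message at the root
x = {'w': 'alone', 'w-1': 'ate', 'w-2': 'John'}
assert factorized(x) == (w + (w_1 & w) + (w_2 & w_1 & w))(x)  # same vector as Eq. (4)
```

Evaluating the extractor delivered at the root gives the same feature vector as the original sum of products, mirroring the hand-derived factorization in the example run below.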
Algorithm 1 The Generalized Distributive Law Algorithm for simplifying a feature extractor f. See the text for details. 1: Convert f into a canonical sum of products representation (§ 5.1). 2: Construct a junction tree whose nodes are local domains. 3: for edge (vj, vi) in the post-order traversal of the tree do 4: Receive a message µij at vj using (7). 5: end for 6: return the incoming message at the root Example run of message propagation. As an illustration, let us apply it to our running example. 1. The first message is from A to B. Since A has no children and its local function is the identity function, we have µAB(x) = x. Similarly, we have µCD(x) = x. 2. The message from B to D has to marginalize out the variable x3. That is, we have µBD(x1, x2) = P x3 z(x1, x2, x3)µAB(x3). The summation is over the domain of x3, namely {1, w-2}. By substituting for z and µAB, and simplifying, we get the message: x1 x2 µBD(x1, x2) 1 1 0 1 w-1 0 w 1 1 w w-1 1 + w-2 3. The message from D to E marginalizes out the variable x2 to give us µDE(x1) = P x2 z(x1, x2)µCD(x2)µBD(x1, x2). Here, the summation is over the domain of x2, namely {1, w-1}. We can simplify the message as: x1 µDE(x1) 1 0 w 1 + (1 + w-2) &w-1 4. Finally, the message from E to the root F marginalizes out the variable x1 by summing over its domain {1, w} to give us the message (1 + (1 + w-2) &w-1) &w. The message received at the root is the factorized feature extractor. Note that the final form is identical to (5) at the beginning of §5. Discussion. An optimal refactoring algorithm would produce a feature extractor that is both correct and fastest. The algorithm above has the former guarantee. While it does reduce the number of operations performed, the closeness of the refac1896 tored feature function to the fastest one depends on the heuristic used to weight edges for identifying the junction tree. Changing the heuristic can change the junction tree, thus changing the final factorized function. We found via experiments that using the number of times a feature extractor occurs in the sum-of-products to weight edges is promising. A formal study of optimality of factorization is an avenue of future research. 6 Experiments We show the practical usefulness of feature function refactoring using text chunking and relation extraction. In both cases, the question we seek to evaluate empirically is: Does the feature function refactoring algorithm improve feature extraction time? We should point out that our goal is not to measure accuracy of prediction, but the efficiency of feature extraction. Indeed, we are guaranteed that refactoring will not change accuracy; factorized feature extractors produce the same feature vectors as the original ones. In all experiments, we compare a feature extractor and its refactored variant. For the factorization, we incentivized the junction tree to factor out base feature extractors that occurred most frequently in the feature extractor. For both tasks, we use existing feature representations that we briefly describe. We refer the reader to the original work that developed the feature representations for further details. For both the original and the factorized feature extractors, we report (a) the number of additions and conjunctions at the template level, and, (b) the time for feature extraction on the entire dataset. 
For the time measurements, we report average times for the original and factorized feature extractors over five paired runs to average out variations in system load.1 6.1 Text Chunking We use data from the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000) of text chunking and the feature set described by Martins et al. (2011), consisting of the following templates extracted at each word: (1) Up to 3-grams of POS tags within a window of size ten centered at the word, (2) up to 3-grams of words, within a window of size six centered at the word, and (3) up to 2-grams of word shapes, within a window of size 1We performed all our experiments on a server with 128GB RAM and 24 CPU cores, each clocking at 2600 MHz. Size Average feature Setting + & extraction time (ms) Original 47 75 17776.6 Factorized 47 54 4294.2 Table 1: Comparison of the original and factorized feature extractors for the text chunking task. The time improvement is statistically significant using the paired t-test at p < 0.01. Size Average feature Setting + & extraction time (ms) Original 43 19 8173.0 Factorized 43 11 6276.4 Table 2: Comparison of the original and factorized feature extractors for the relation extraction task. We measured time using 3191 training mention pairs. The time improvement is statistically significant using the paired t-test at p < 0.01. four centered at the word. In all, there are 96 feature templates. We factorized the feature representation using Algorithm 1. Table 1 reports the number of operations (addition and conjunction) in the templates in the original and factorized versions of the feature extractor. The table also reports feature extraction time taken from the entire training set of 8,936 sentences, corresponding to 211,727 tokens. First, we see that the factorization reduces the number of feature conjunction operations. Thus, to produce exactly the same feature vector, the factorized feature extractor does less work. The time results show that this computational gain is not merely a theoretical one; it also manifests itself practically. 6.2 Relation Extraction Our second experiment is based on the task of relation extraction using the English section of the ACE 2005 corpus (Walker et al., 2006). The goal is to identify semantic relations between two entity mentions in text. We use the feature representation developed by Zhou et al. (2005) as part of an investigation of how various lexical, syntactic and semantic sources of information affect the relation extraction task. To this end, the feature set consists of word level information about mentions, their entity types, their relationships with chunks, path features from parse trees, and semantic features based on WordNet and various word lists. Given the complexity of the features, we do not describe them here and refer the reader to the original work for details. Note that compared to the chunking features, these features are more diverse in their computational costs. We report the results of our experiments in Ta1897 ble 2. As before, we see that the number of conjunction operations decreases after factorization. Curiously, however, despite the complexity of the feature set, the actual number of operations is smaller than text chunking. Due to this, we see a more modest, yet significant decrease in the time for feature extraction after factorization. 7 Related Work and Discussion Simplifying Expressions. The problem of simplifying expressions with an eye on computational efficiency is the focus of logic synthesis (cf. 
Hachtel and Somenzi, 2006), albeit largely geared towards analyzing and verifying digital circuits. Logic synthesis is NP-hard in general. In our case, the hardness is hidden in the fact that our approach does not guarantee that we will find the smallest (or most efficient) factorization. The junction tree construction determines the factorization quality. Semirings in NLP. Semirings abound in NLP, though primarily as devices to design efficient inference algorithms for various graphical models (e.g. Wainwright and Jordan, 2008; Sutton et al., 2012). Goodman (1999) synthesized various parsing algorithms in terms of semiring operations. Since then, we have seen several explorations of the interplay between weighted dynamic programs and semirings for inference in tasks such as parsing and machine translation (e. g. Eisner et al., 2005; Li and Eisner, 2009; Lopez, 2009; Gimpel and Smith, 2009). Allauzen et al. (2003) developed efficient algorithms for constructing statistical language models by exploiting the algebraic structure of the probability semiring. Feature Extraction and Modeling Languages. Much work around features in NLP is aimed at improving classifier accuracy. There is some work on developing languages to better construct feature spaces (Cumby and Roth, 2002; Broda et al., 2013; Sammons et al., 2016), but they do not formalize feature extraction from an algebraic perspective. We expect that the algorithm proposed in this paper can be integrated into such feature construction languages, and also into libraries geared towards designing feature rich models (e.g. McCallum et al., 2009; Chang et al., 2015). Representation vs. Speed. As the recent successes (Goodfellow et al., 2016) of distributed representations show, the representational capacity of a feature space is of primary importance. Indeed, several recent lines of work that use distributed representations have independently identified the connection between conjunctions (of features or factors in a factor graph) and tensor products (Lei et al., 2014; Srikumar and Manning, 2014; Gormley et al., 2015; Yu et al., 2015; Lei et al., 2015; Primadhanty et al., 2015). They typically impose sparsity or low-rank requirements to induce better representations for learning. In this paper, we use the connection between tensor products and conjunctions to prove algebraic properties of feature extractors, leading to speed improvements via factorization. In this context, we note that in both our experiments, the number of conjunctions are reduced by factorization. We argue that this is an important saving because conjunctions can be a more expensive operation. This is especially true when dealing with dense feature representations, as is increasingly common with word vectors and neural networks, because conjunctions of dense feature vectors are tensor products, which can be slow. Finally, while training classifiers can be time consuming, when trained classifiers are deployed, feature extraction will dominate computation time over the classifier’s lifetime. However, the prediction step includes both feature extraction and computing inner products between features and weights. Many features may be associated with zero weights because of sparsity-inducing learning (e.g. Andrew and Gao, 2007; Martins et al., 2011; Strubell et al., 2015). Since these two aspects are orthogonal to each other, the factorization algorithm presented in this paper can be used to speed up extraction of those features that have non-zero weights. 
8 Conclusion In this paper, we studied the process of feature extraction using an algebraic lens. We showed that the set of feature extractors form a commutative semiring over addition and conjunction. We exploited this characterization to develop a factorization algorithm that simplifies feature extractors to be more computationally efficient. We demonstrated the practical value of the refactoring algorithm by speeding up feature extraction for text chunking and relation extraction tasks. Acknowledgments The author thanks the anonymous reviewers for their insightful comments and feedback. 1898 References Srinivas M Aji and Robert J McEliece. 2000. The generalized distributive law. IEEE Transactions on Information Theory 46(2). Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical language models. In ACL. Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In ICML. Bartosz Broda, Paweł K˛edzia, Michał Marci´nczuk, Adam Radziszewski, Radosław Ramocki, and Adam Wardy´nski. 2013. Fextor: A feature extraction framework for natural language processing: A case study in word sense disambiguation, relation recognition and anaphora resolution. In Computational Linguistics, Springer, pages 41–62. Kai-Wei Chang, Shyam Upadhyay, Ming-Wei Chang, Vivek Srikumar, and Dan Roth. 2015. IllinoisSL: A JAVA library for Structured Prediction. arXiv preprint arXiv:1509.07179 . Robert G Cowell. 2006. Probabilistic networks and expert systems: Exact computational methods for Bayesian networks. Springer Science & Business Media. Chad M Cumby and Dan Roth. 2002. Learning with feature description logics. In Inductive logic programming, Springer. Jason Eisner, Eric Goldlust, and Noah A Smith. 2005. Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language. In HLTEMNLP. Kevin Gimpel and Noah A Smith. 2009. Cube summing, approximate inference with non-local features, and dynamic programming without semirings. In EACL. Jonathan S Golan. 2013. Semirings and their Applications. Springer Science & Business Media. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. Joshua Goodman. 1999. Semiring parsing. Computational Linguistics 25(4):573–605. Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In EMNLP. Gary D Hachtel and Fabio Somenzi. 2006. Logic synthesis and verification algorithms. Springer Science & Business Media. Frank R Kschischang, Brendan J Frey, and H-A Loeliger. 2001. Factor graphs and the sum-product algorithm. IEEE Transactions on information theory 47(2):498–519. Tao Lei, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In ACL. Tao Lei, Yuan Zhang, Lluís Màrquez, Alessandro Moschitti, and Regina Barzilay. 2015. High-order lowrank tensors for semantic role labeling. In NAACL. Zhifei Li and Jason Eisner. 2009. First- and secondorder expectation semirings with applications to minimum-risk training on translation forests. In EMNLP. Adam Lopez. 2009. Translation as weighted deduction. In EACL. André FT Martins, Noah A Smith, Pedro MQ Aguiar, and Mário AT Figueiredo. 2011. Structured sparsity in structured prediction. In CoNLL. Andrew McCallum, Karl Schultz, and Sameer Singh. 2009. Factorie: Probabilistic programming via imperatively defined factor graphs. In NIPS. Judea Pearl. 2014. 
Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann. Audi Primadhanty, Xavier Carreras, and Ariadna Quattoni. 2015. Low-rank regularization for sparse conjunctive feature spaces: An application to named entity classification. In ACL. Raymond A Ryan. 1980. Applications of topological tensor products to infinite dimensional holomorphy. Ph.D. thesis, Trinity College. Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, and Dan Roth. 2016. EDISON: Feature Extraction for NLP. In LREC. Vivek Srikumar and Christopher D. Manning. 2014. Learning distributed representations for structured output prediction. In NIPS. Emma Strubell, Luke Vilnis, Kate Silverstein, and Andrew McCallum. 2015. Learning Dynamic Feature Selection for Fast Sequential Prediction. In ACL. Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Foundations and Trends R⃝in Machine Learning 4(4):267– 373. Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In CoNLL. Martin J Wainwright and Michael I Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends R⃝in Machine Learning 1(1–2):1–305. 1899 Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia 57. Mo Yu, Matthew R. Gormley, and Mark Dredze. 2015. Combining Word Embeddings and Feature Embeddings for Fine-grained Relation Extraction. In NAACL. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In ACL. 1900
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1901–1912 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1174 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1901–1912 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1174 Chunk-based Decoder for Neural Machine Translation Shonosuke Ishiwatar†∗ Jingtao Yao‡∗ Shujie Liu§ Mu Li§ Ming Zhou§ Naoki Yoshinaga¶ Masaru Kitsuregawa∥¶ Weijia Jia‡ † The University of Tokyo ‡ Shanghai Jiao Tong University § Microsoft Research Asia ¶ Institute of Industrial Science, the University of Tokyo ∥National Institute of Informatics †¶∥{ishiwatari, ynaga, kitsure}@tkl.iis.u-tokyo.ac.jp ‡{yjt1995@, jia-wj@cs.}sjtu.edu.cn §{shujliu, muli, mingzhou}@microsoft.com Abstract Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intrachunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunklevel decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the wordlevel decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT ’16 Englishto-Japanese translation task. 1 Introduction Neural machine translation (NMT) performs endto-end translation based on a simple encoderdecoder model (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014b) and has now overtaken the classical, complex statistical machine translation (SMT) in terms of performance and simplicity (Sennrich et al., 2016; Luong and Manning, 2016; Cromieres et al., 2016; Neubig, 2016). In NMT, an encoder first maps a source sequence into vector representations and ∗Contribution during internship at Microsoft Research. !"# $ %&"'()**'+ ,+**-+.(*(&/01(/2 3&# !" # $ %& ' ()* +* , -& + . 4((0 -+.( (&/05 ,+*6&78 Figure 1: Translation from English to Japanese. The function words are underlined. a decoder then maps the vectors into a target sequence (§ 2). This simple framework allows researchers to incorporate the structure of the source sentence as in SMT by leveraging various architectures as the encoder (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014b; Eriguchi et al., 2016b). Most of the NMT models, however, still rely on a sequential decoder based on a recurrent neural network (RNN) due to the difficulty in capturing the structure of a target sentence that is unseen during translation. With the sequential decoder, however, there are two problems to be solved. First, it is difficult to model long-distance dependencies (Bahdanau et al., 2015). A hidden state ht in an RNN is only conditioned by its previous output yt−1, previous hidden state ht−1, and current input xt. 
This makes it difficult to capture the dependencies between an older output yt−N if they are too far from the current output. This problem can become more serious when the target sequence becomes longer. For example, in Figure 1, when we translate the English sentence into the Japanese one, after the decoder predicts the content word “帰っ (go back)”, it has to predict four function words “て(suffix)”, “しまい(perfect tense)”, “たい(desire)”, and “と(to)” before predicting the next content word “思っ(feel)”. In such a case, the decoder is required to capture the longer dependencies in a target sentence. Another problem with the sequential decoder is that it is expected to cover multiple possible word orders simply by memorizing the local word se1901 quences in the limited training data. This problem can be more serious in free word-order languages such as Czech, German, Japanese, and Turkish. In the case of the example in Figure 1, the order of the phrase “早く(early)” and the phrase “家へ(to home)” is flexible. This means that simply memorizing the word order in training data is not enough to train a model that can assign a high probability to a correct sentence regardless of its word order. In the past, chunks (or phrases) were utilized to handle the above problems in statistical machine translation (SMT) (Watanabe et al., 2003; Koehn et al., 2003) and in example-based machine translation (EBMT) (Kim et al., 2010). By using a chunk rather than a word as the basic translation unit, one can treat a sentence as a shorter sequence. This makes it easy to capture the longer dependencies in a target sentence. The order of words in a chunk is relatively fixed while that in a sentence is much more flexible. Thus, modeling intra-chunk (local) word orders and inter-chunk (global) dependencies independently can help capture the difference of the flexibility between the word order and the chunk order in free word-order languages. In this paper, we refine the original RNN decoder to consider chunk information in NMT. We propose three novel NMT models that capture and utilize the chunk structure in the target language (§ 3). Our focus is the hierarchical structure of a sentence: each sentence consists of chunks, and each chunk consists of words. To encourage an NMT model to capture the hierarchical structure, we start from a hierarchical RNN that consists of a chunk-level decoder and a word-level decoder (Model 1). Then, we improve the word-level decoder by introducing inter-chunk connections to capture the interaction between chunks (Model 2). Finally, we introduce a feedback mechanism to the chunk-level decoder to enhance the memory capacity of previous outputs (Model 3). We evaluate the three models on the WAT ’16 English-to-Japanese translation task (§ 4). The experimental results show that our best model outperforms the best single NMT model reported in WAT ’16 (Eriguchi et al., 2016b). Our contributions are twofold: (1) chunk information is introduced into NMT to improve translation performance, and (2) a novel hierarchical decoder is devised to model the properties of chunk structure in the encoder-decoder framework. 2 Preliminaries: Attention-based Neural Machine Translation In this section, we briefly introduce the architecture of the attention-based NMT model (Bahdanau et al., 2015), which is the basis of our proposed models. 2.1 Neural Machine Translation An NMT model usually consists of two connected neural networks: an encoder and a decoder. 
After the encoder maps a source sentence into a fixed-length vector, the decoder maps the vector into a target sentence. The implementation of the encoder can be a convolutional neural network (CNN) (Kalchbrenner and Blunsom, 2013), a long short-term memory (LSTM) (Sutskever et al., 2014; Luong and Manning, 2016), a gated recurrent unit (GRU) (Cho et al., 2014b; Bahdanau et al., 2015), or a Tree-LSTM (Eriguchi et al., 2016b). While various architectures are leveraged as an encoder to capture the structural information in the source language, most of the NMT models rely on a standard sequential network such as LSTM or GRU as the decoder. Following (Bahdanau et al., 2015), we use GRU as the recurrent unit in this paper. A GRU unit computes its hidden state vector hi given an input vector xi and the previous hidden state hi−1: hi = GRU(hi−1, xi). (1) The function GRU(·) is calculated as ri = σ(Wrxi + Urhi−1 + br), (2) zi = σ(Wzxi + Uzhi−1 + bz), (3) ˜hi = tanh(W xi + U(ri ⊙hi−1 + b)), (4) hi = (1 −zi) ⊙˜hi + zi ⊙hi−1, (5) where vectors ri and zi are reset gate and update gate, respectively. While the former gate allows the model to forget the previous states, the latter gate decides how much the model updates its content. All the W s and Us, or the bs above are trainable matrices or vectors. σ(·) and ⊙denote the sigmoid function and element-wise multiplication operator, respectively. In this simple model, we train a GRU function that encodes a source sentence {x1, · · · , xI} into a single vector hI. At the same time, we jointly train another GRU function that decodes hI to the target sentence {y1, · · · , yJ}. Here, the j-th word in the 1902 !" だれ か が #" !"#$%&'( )*%%&"(+,-,&+ $% . / )&-'% &' . . $( $' . . )% )( )" )* % )* ( )*" Figure 2: Standard word-based decoder. target sentence yj can be predicted with this decoder GRU and a nonlinear function g(·) followed by a softmax layer, as c = hI, (6) sj = GRU(sj−1, [yj−1; c]), (7) ˜sj = g(yj−1, sj, c), (8) P(yj|y<j, x) = softmax(˜sj), (9) where c is a context vector of the encoded sentence and sj is a hidden state of the decoder GRU. Following Bahdanau et al. (2015), we use a mini-batch stochastic gradient descent (SGD) algorithm with ADADELTA (Zeiler, 2012) to train the above two GRU functions (i.e., the encoder and the decoder) jointly. The objective is to minimize the cross-entropy loss of the training data D, as J = X (x,y)∈D −log P(y|x). (10) 2.2 Attention Mechanism for Neural Machine Translation To use all the hidden states of the encoder and improve the translation performance of long sentences, Bahdanau et al. (2015) proposed using an attention mechanism. In the attention model, the context vector is not simply the last encoder state hI but rather the weighted sum of all hidden states of the bidirectional GRU, as follows: cj = I X i=1 αjihi. (11) Here, the weight αji decides how much a source word xi contributes to the target word yj. αji is computed by a feedforward layer and a softmax layer as eji = v · tanh(Wehi + Uesj + be), (12) αji = exp(eji) PJ j′=1 exp(ej′i) , (13) !"#$% !"&$'($% !"&)'*+), ($% !"#-'.+-% &$ &/ ! ! !"&/'*+/, (-% ! ! !"&0'*+0, ($% ! ! !"&$'(-% だれ か が Figure 3: Chunk-based decoder. The top layer (word-level decoder) illustrates the first term in Eq. (15) and the bottom layer (chunk-level decoder) denotes the second term. where We, Ue are trainable matrices and the v, be are trainable vectors.1 In a decoder using the attention mechanism, the obtained context vector cj in each time step replaces cs in Eqs. (7) and (8). 
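For a concrete reference point for Eqs. (11)–(13), the numpy sketch below computes the attention weights and the context vector for a single decoder step; the array names, dimensions, and random parameters are assumptions made only for illustration and do not come from the authors' implementation.

```python
import numpy as np

def attention_context(h, s_j, W_e, U_e, v, b_e):
    """One step of Bahdanau-style attention.

    h   : (I, 2d) encoder hidden states (bidirectional GRU outputs)
    s_j : (d_dec,) decoder hidden state at the current step
    """
    e = np.tanh(h @ W_e.T + s_j @ U_e.T + b_e) @ v   # alignment scores, Eq. (12)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                             # softmax over source positions, Eq. (13)
    c_j = alpha @ h                                  # context vector, Eq. (11)
    return c_j, alpha

# Toy shapes: I=5 source words, encoder dim 8, decoder dim 6, attention dim 10.
rng = np.random.default_rng(0)
h, s_j = rng.normal(size=(5, 8)), rng.normal(size=6)
W_e, U_e = rng.normal(size=(10, 8)), rng.normal(size=(10, 6))
v, b_e = rng.normal(size=10), rng.normal(size=10)
c_j, alpha = attention_context(h, s_j, W_e, U_e, v, b_e)
print(alpha.sum())   # 1.0: the weights form a distribution over source words
```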
An illustration of the NMT model with the attention mechanism is shown in Figure 2. The attention mechanism is expected to learn alignments between source and target words, and plays a similar role to the translation model in phrase-based SMT (Koehn et al., 2003). 3 Neural Machine Translation with Chunk-based Decoder Taking non-sequential information such as chunks (or phrases) structure into consideration has proved helpful for SMT (Watanabe et al., 2003; Koehn et al., 2003) and EBMT (Kim et al., 2010). Here, we focus on two important properties of chunks (Abney, 1991): (1) The word order in a chunk is almost always fixed, and (2) A chunk consists of a few (typically one) content words surrounded by zero or more function words. To fully utilize the above properties of a chunk, we propose modeling the intra-chunk and the inter-chunk dependencies independently with a “chunk-by-chunk” decoder (See Figure 3). In the standard word-by-word decoder described in § 2, a target word yj in the target sentence y is predicted by taking the previous outputs y<j and the source sentence x as input: P(y|x) = J Y j=1 P(yj|y<j, x), (14) where J is the length of the target sentence. Not 1We choose this implementation following (Luong et al., 2015b), while (Bahdanau et al., 2015) use sj−1 instead of sj in Eq. (12). 1903 !"#$ !"#$%&'('& )'*"$'#+ ,!-./.0+ 1 %"#& '() %"#* '() 23456++ 23456+, -+ 23456+, %"#$ '() %. "#$ '() だれ か が 犬 噛ま れ %. "#& '() %. "#* '() /"#$ %"0&# 1 '() /"0&#1234 5 23456%&'('& )'*"$'#+ ,!-././+ 1 7 7 7 !"#$%& '(&)*+$,-./0*1& ."**$2+3"*& 4§56'7 !"#$%& 5(&8",#-+"-./0*1& 9$$#:;21& 4§5657 %& '6) %" '6) %"0& '6) %"#$ '6) 7& 7 8 3'9#$ 89 7* 79 7 7 :5*"$'#+ 3;$$'5+<=9='< %"#* '6) 2345 1 Figure 4: Proposed model: NMT with chunk-based decoder. A chunk-level decoder generates a chunk representation for each chunk while a word-level decoder uses the representation to predict each word. The solid lines in the figure illustrate Model 1. The dashed blue arrows in the word-level decoder denote the connections added in Model 2. The dotted red arrows in the chunk-level decoder denote the feedback states added in Model 3; the connections in the thick black arrows are replaced with the dotted red arrows. assuming any structural information of the target language, the sequential decoder has to memorize long dependencies in a sequence. To release the model from the pressure of memorizing the long dependencies over a sentence, we redefine this problem as the combination of a word prediction problem and a chunk generation problem: P(y|x) = K Y k=1   P(ck|c<k, x) Jk Y j=1 P(yj|y<j, ck, x)   , (15) where K is the number of chunks in the target sentence and Jk is the length of the k-th chunk (see Figure 3). The first term represents the generation probability of a chunk ck and the second term indicates the probability of a word yj in the chunk. We model the former term as a chunk-level decoder and the latter term as a word-level decoder. As demonstrated later in § 4, both K and Jk are much shorter than the sentence length J, which is why our decoders do not have to capture the long dependencies like the standard decoder does. In the above formulation, we model the information of words and their orders in a chunk. No matter which language we target, we can assume that a chunk usually consists of some content words and function words, and the word order in the chunk is almost always fixed (Abney, 1991). 
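Equation (15) amounts to scoring the target sentence chunk by chunk and, within each chunk, word by word. The schematic helper below makes that bookkeeping explicit; the chunking of the toy Japanese sentence and the constant dummy probabilities are illustrative assumptions, not the model's actual distributions.

```python
import math

def chunked_log_prob(chunks, chunk_log_prob, word_log_prob):
    """Score a chunked target sentence with the factorization of Eq. (15).

    chunks         : list of chunks, each a list of target words
    chunk_log_prob : fn(k, previous_chunks) -> log P(c_k | c_<k, x)
    word_log_prob  : fn(word, history, k)   -> log P(y_j | y_<j, c_k, x)
    """
    total, history = 0.0, []
    for k, chunk in enumerate(chunks):
        total += chunk_log_prob(k, chunks[:k])        # chunk-level decoder term
        for word in chunk:
            total += word_log_prob(word, history, k)  # word-level decoder term
            history.append(word)
    return total

# Toy usage: an illustrative bunsetsu segmentation and uniform dummy distributions.
chunks = [["だれ", "か", "が"], ["犬", "に"], ["噛ま", "れ", "た"]]
lp = chunked_log_prob(chunks,
                      chunk_log_prob=lambda k, prev: math.log(0.5),
                      word_log_prob=lambda w, hist, k: math.log(0.1))
print(lp)   # log-probability accumulated over 3 chunk terms and 8 word terms
```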
Although our idea can be used in several languages, the optimal network architecture could depend on the word order of the target language. In this work, we design models for languages in which content words are followed by function words, such as Japanese and Korean. The details of our models are described in the following sections. 3.1 Model 1: Basic Chunk-based Decoder The model described in this section is the basis of our proposed decoders. It consists of two parts: a chunk-level decoder (§ 3.1.1) and a word-level decoder (§ 3.1.2). The part drawn in black solid lines in Figure 4 illustrates the architecture of Model 1. 3.1.1 Chunk-level Decoder Our chunk-level decoder (see Figure 3) outputs a chunk representation. The chunk representation contains the information about words that should be predicted by the word-level decoder. To generate the representation of the k-th chunk ˜s(c) k , the chunk-level decoder (see the bottom layer in Figure 4) takes the last states of the word-level decoder s(w) k−1,Jk−1 and updates its hidden state s(c) k as: s(c) k = GRU(s(c) k−1, s(w) k−1,Jk−1), (16) ˜s(c) k = Wcs(c) k + bc. (17) The obtained chunk representation ˜s(c) k continues to be fed into the word-level decoder until it outputs all the words in the current chunk. 3.1.2 Word-level Decoder Our word-level decoder (see Figure 4) differs from the standard sequential decoder described in § 2 in 1904 that it takes the chunk representation ˜s(c) k as input: s(w) k,j = GRU(s(w) k,j−1, [˜s(c) k ; yk,j−1; ck,j−1]), (18) ˜s(w) k,j = g(yk,j−1, s(w) k,j , ck,j), (19) P(yk,j|y<j, x) = softmax(˜s(w) k,j ). (20) In a standard sequential decoder, the hidden state iterates over the length of a target sentence and then generates an end-of-sentence token. In other words, its hidden layers are required to memorize the long-term dependencies and orders in the target language. In contrast, in our word-level decoder, the hidden state iterates only over the length of a chunk and then generates an end-of-chunk token. Thus, our word-level decoder is released from the pressure of memorizing the long (interchunk) dependencies and can focus on learning the short (intra-chunk) dependencies. 3.2 Model 2: Inter-Chunk Connection The second term in Eq. (15) only iterates over one chunk (j = 1 to Jk). This means that the last state and the last output of a chunk are not being fed into the word-level decoder at the next time step (see the black part in Figure 4). In other words, s(w) k,1 in Eq. (18) is always initialized before generating the first word in a chunk. This may have a bad influence on the word-level decoder because it cannot access any previous information at the first word of each chunk. To address this problem, we add new connections to Model 1 between the first state in a chunk and the last state in the previous chunk, as s(w) k,1 = GRU(s(w) k−1,Jk−1, [˜s(c) k ; yk−1,Jk−1; ck−1,Jk−1]). (21) The dashed blue arrows in Figure 4 illustrate the added inter-chunk connections. 3.3 Model 3: Word-to-Chunk Feedback The chunk-level decoder in Eq. (16) is only conditioned by s(w) k−1,Jk−1, the last word state in each chunk (see the black part in Figure 4). This may affect the chunk-level decoder because it cannot memorize what kind of information has already been generated by the word-level decoder. The information about the words in a chunk should not be included in the representation of the next chunk; otherwise, it may generate the same chunks multiple times, or forget to translate some words in the source sentence. 
To encourage the chunk-level decoder to memorize the information about the previous outputs more carefully, we add feedback states to our chunk-level decoder in Model 2. The feedback state in the chunk-level decoder is updated at every time step j(> 1) in k-th chunk, as s(c) k,j = GRU(s(c) k,j−1, s(w) k,j ). (22) The red part in Figure 4 illustrate the added feedback states and their connections. The connections in the thick black arrows are replaced with the dotted red arrows in Model 3. 4 Experiments 4.1 Setup Data To examine the effectiveness of our decoders, we chose Japanese, a free word-order language, as the target language. Japanese sentences are easy to break into well-defined chunks (called bunsetsus (Hashimoto, 1934) in Japanese). For example, the accuracy of bunsetsu-chunking on newspaper articles is reported to be over 99% (Murata et al., 2000; Yoshinaga and Kitsuregawa, 2014). The effect of chunking errors in training the decoder can be suppressed, which means we can accurately evaluate the potential of our method. We used the English-Japanese training corpus in the Asian Scientific Paper Excerpt Corpus (ASPEC) (Nakazawa et al., 2016), which was provided in WAT ’16. To remove inaccurate translation pairs, we extracted the first two million out of the 3 million pairs following the setting that gave the best performances in WAT ’15 (Neubig et al., 2015). Preprocessings For Japanese sentences, we performed tokenization using KyTea 0.4.72 (Neubig et al., 2011). Then we performed bunsetsuchunking with J.DepP 2015.10.053 (Yoshinaga and Kitsuregawa, 2009, 2010, 2014). Special endof-chunk tokens were inserted at the end of the chunks. Our word-level decoders described in § 3 will stop generating words after each endof-chunk token. For English sentences, we performed the same preprocessings described on the WAT ’16 Website.4 To suppress having possible 2http://www.phontron.com/kytea/ 3http://www.tkl.iis.u-tokyo.ac.jp/ ˜ynaga/jdepp/ 4http://lotus.kuee.kyoto-u.ac.jp/WAT/ baseline/dataPreparationJE.html 1905 Corpus # words # chunks # sentences Train 49,671,230 15,934,129 1,663,780 Dev. 54,287 1,790 Test 54,088 1,812 Table 1: Statistics of the target language (Japanese) in extracted corpus after preprocessing. chunking errors affect the translation quality, we removed extremely long chunks from the training data. Specifically, among the 2 million preprocessed translation pairs, we excluded sentence pairs that matched any of following conditions: (1) The length of the source sentence or target sentence is larger than 64 (3% of whole data); (2) The maximum length of a chunk in the target sentence is larger than 8 (14% of whole data); and (3) The maximum number of chunks in the target sentence is larger than 20 (3% of whole data). Table 1 shows the details of the extracted data. Postprocessing To perform unknown word replacement (Luong et al., 2015a), we built a bilingual English-Japanese dictionary from all of the three million translation pairs. The dictionary was extracted with the MGIZA++ 0.7.05 (Och and Ney, 2003; Gao and Vogel, 2008) word alignment tool by automatically extracting the alignments between English words and Japanese words. Model Architecture Any encoder can be combined with our decoders. In this work, we adopted a single-layer bidirectional GRU (Cho et al., 2014b; Bahdanau et al., 2015) as the encoder to focus on confirming the impact of the proposed decoders. We used single layer GRUs for the wordlevel decoder and the chunk-level decoder. 
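To make the data flow between the two decoders (Eqs. (16)-(21)) easier to trace, here is a minimal NumPy sketch with stand-in GRUs, placeholder attention contexts, and toy dimensions; it covers Model 1 plus the Model 2 inter-chunk connection and omits the Model 3 feedback states.

```python
import numpy as np

d = 8                                        # toy hidden/embedding size
rng = np.random.default_rng(1)

def make_gru(in_dim, out_dim=d):
    """Stand-in for a trained GRU transition (fixed random affine + tanh),
    used only to trace how states flow between the two decoders."""
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    U = rng.standard_normal((out_dim, out_dim)) * 0.1
    return lambda h, x: np.tanh(W @ x + U @ h)

chunk_gru = make_gru(d)                      # Eq. (16)
word_gru = make_gru(3 * d)                   # Eq. (18): input [s_tilde_c ; y_prev ; ctx]
Wc, bc = rng.standard_normal((d, d)) * 0.1, np.zeros(d)

def decode(chunk_lengths):
    s_c = np.zeros(d)                        # chunk-level state
    s_w = np.zeros(d)                        # word-level state
    outputs = []
    for Jk in chunk_lengths:
        s_c = chunk_gru(s_c, s_w)            # Eq. (16): last word state of previous chunk
        s_tilde_c = Wc @ s_c + bc            # Eq. (17): chunk representation
        for _ in range(Jk):
            y_prev = rng.standard_normal(d)  # placeholder embedding of y_{k,j-1}
            ctx = rng.standard_normal(d)     # placeholder attention context c_{k,j-1}
            # Eq. (18) / Eq. (21): the word state carries over across chunk
            # boundaries (the Model 2 inter-chunk connection), so s_w is not reset.
            s_w = word_gru(s_w, np.concatenate([s_tilde_c, y_prev, ctx]))
            outputs.append(s_w)
    return outputs

print(len(decode([3, 2, 3])))  # 8 word states for a toy 3-chunk sentence
```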
The vocabulary sizes were set to 40k for source side and 30k for target side, respectively. The conditional probability of each target word was computed with a deep-output (Pascanu et al., 2014) layer with maxout (Goodfellow et al., 2013) units following (Bahdanau et al., 2015). The maximum number of output chunks was set to 20 and the maximum length of a chunk was set to 8. Training Details The models were optimized using ADADELTA following (Bahdanau et al., 2015). The hyperparameters of the training procedure were fixed to the values given in Table 2. Note that the learning rate was halved when the BLEU score on the development set did not in5https://github.com/moses-smt/mgiza ρ of ADADELTA 0.95 ϵ of ADADELTA 1e−6 Initial learning rate 1.0 Gradient clipping 1.0 Mini-batch size 64 dhid (dimension of hidden states) 1024 demb (dimension of word embeddings) 1024 Table 2: Hyperparameters for training. crease for 30,000 batches. All the parameters were initialized randomly with Gaussian distribution. It took about a week to train each model with an NVIDIA TITAN X (Pascal) GPU. Evaluation Following the WAT ’16 evaluation procedure, we used BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) to evaluate our models. The BLEU scores were calculated with multi-bleu.pl in Moses 2.1.16 (Koehn et al., 2007); RIBES scores were calculated with RIBES.py 1.03.17 (Isozaki et al., 2010). Following Cho et al. (2014a), we performed beam search8 with length-normalized log-probability to decode target sentences. We saved the trained models that performed best on the development set during training and used them to evaluate the systems with the test set. Baseline Systems The baseline systems and the important hyperparamters are listed in Table 3. Eriguchi et al. (2016a)’s baseline system (the first line in Table 3) was the best single (w/o ensembling) word-based NMT system that were reported in WAT ’16. For a more fair evaluation, we also reimplemented a standard attention-based NMT system that uses exactly the same encoder, training procedure, and the hyperparameters as our proposed models, but has a word-based decoder. We trained this system on the training data without chunk segmentations (the second line in Table 3) and with chunk segmentations given by J.DepP (the third line in Table 3). The chunked corpus fed to the third system is exactly the same as the training data of our proposed systems (sixth to eighth lines in Table 3). In addition, we also include the Tree-to-Sequence models (Eriguchi et al., 2016a,b) (the fourth and fifth lines in Table 3) to compare the impact of capturing the structure in the source language and that in 6http://www.statmt.org/moses/ 7http://www.kecl.ntt.co.jp/icl/lirg/ ribes/index.html 8Beam size is set to 20. 1906 System Hyperparameter Dec. time Encoder Type / Decoder Type |Vsrc| |Vtrg| demb dhid BLEU RIBES [ms/sent.] 
Word-based / Word-based (Eriguchi et al., 2016a) 88k 66k 512 512 34.64 81.60 / Word-based (our implementation) 40k 30k 1024 1024 36.33 81.22 84.1 + chunked training data via J.DepP 40k 30k 1024 1024 35.71 80.89 101.5 Tree-based / Word-based (Eriguchi et al., 2016b) 88k 66k 512 512 34.91 81.66 (363.7)9 / Char-based (Eriguchi et al., 2016a) 88k 3k 256 512 31.52 79.39 (8.8)9 Word-based / Proposed Chunk-based (Model 1) 40k 30k 1024 1024 34.70 81.01 165.2 + Inter-chunk connection (Model 2) 40k 30k 1024 1024 35.81 81.29 165.2 + Word-to-chunk feedback (Model 3) 40k 30k 1024 1024 37.26 82.23 163.7 Table 3: The settings and results of the baseline systems and our systems. |Vsrc| and |Vtrg| denote the vocabulary size of the source language and the target language, respectively. demb and dhid are the dimension size of the word embeddings and hidden states, respectively. Only single NMT models (w/o ensembling) reported in WAT ’16 are listed here. Full results are available on the WAT ’16 Website.10 the target language. Note that all systems listed in Table 3, including our models, are single models without ensemble techniques. 4.2 Results Proposed Models vs. Baselines Table 3 shows the experimental results on the ASPEC test set. We can observe that our best model (Model 3) outperformed all the single NMT models reported in WAT ’16. The gain obtained by switching Wordbased decoder to Chunk-based decoder (+0.93 BLEU and +1.01 RIBES) is larger than the gain obtained by switching word-based encoder to Treebased encoder (+0.27 BLEU and +0.06 RIBES). This result shows that capturing the chunk structure in the target language is more effective than capturing the syntax structure in the source language. Compared with the character-based NMT model (Eriguchi et al., 2016a), our Model 3 performed better by +5.74 BLEU score and +2.84 RIBES score. One possible reason for this is that using a character-based model rather than a wordbased model makes it more difficult to capture long-distance dependencies because the length of a target sequence becomes much longer in the character-based model. Comparison between Baselines Among the five baselines, our reimplementation without chunk segmentations (the second line in Table 3) achieved the best BLEU score while the Eriguchi et al. (2016b)’s system (the fourth line in Table 3) achieved the best RIBES score. The most probable reasons for the superiority of our reimplementation over the Eriguchi et al. (2016a)’s word-based baseline (the first line in Table 3) is that the dimensions of word embeddings and hidden states in our systems are higher than theirs. Feeding chunked training data to our baseline system (the third line in Table 3) instead of a normal data caused bad effects by −0.62 BLEU score and by −0.33 RIBES score. We evaluated the chunking ability of this system by comparing the positions of end-of-chunk tokens generated by this system with the chunk boundaries obtained by J.DepP. To our surprise, this word-based decoder could output chunk separations as accurate as our proposed Model 3 (both systems achieved F1-score > 97). The results show that even a standard word-based decoder has the ability to predict chunk boundaries if they are given in training data. However, it is difficult for the word-based decoder to utilize the chunk information to improve the translation quality. Decoding Speed Although the chunk-based decoder runs 2x slower than our word-based decoder, it is still practically acceptable (6 sentences per second). 
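The chunk-boundary comparison described above can be scored with a small utility like the following; representing boundaries as sets of token positions is a bookkeeping assumption made for illustration, not the evaluation script actually used.

```python
def boundary_f1(predicted, reference):
    """Compare two sets of chunk-boundary positions (token indices after
    which an end-of-chunk token occurs) and return precision, recall, F1."""
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# e.g. the system breaks after tokens 2, 4, 7 while J.DepP breaks after 2, 5, 7
print(boundary_f1({2, 4, 7}, {2, 5, 7}))   # roughly (0.67, 0.67, 0.67)
```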
The character-based decoder (the fifth line in Table 3) is less time-consuming mainly because of its small vocabulary size (|Vtrg| = 3k). Chunk-level Evaluation To confirm that our models can capture local (intra-chunk) and global (inter-chunk) word orders well, we evaluated the translation quality at the chunk level. First, we performed bunsetsu-chunking on the reference translations in the test set. Then, for both reference translations and the outputs of our systems, we combined all the words in each chunk into a single token to regard a chunk as the basic translation unit instead of a word. Finally, we computed the chunk-based BLEU (C-BLEU) and RIBES 9Tree-to-Seq models are tested on CPUs instead of GPUs. 10http://lotus.kuee.kyoto-u.ac.jp/WAT/ evaluation 1907 !"#$%&!'(#)*#"+'&',!"#$%&!'(#!.'##!'/.+012' &3'/0%,# %452&!$'+! !"#$%&!'(#"6+# !'/.+012' &3'/0%,,*# 40--0/2,! &3'/0%,,*# 40--0/2,! !"#$%&!'(#"6+# !'/.+012' &3'/0%,,*# 40--0/2,! &3'/0%,# 7%458#40--0/2,! !"#$%&!'(#!.'#!'/.+012'# )*#"+'&',90+/'#&3'/0%,,*#40--0/2,!#3"0+!&#%('#-'6#-"(#!.'#%452&!$'+!#:#0!#0&#0$3"(!%+!#!"#$%&!'(#!.'#!'/.+012'#)*#"+'&',-#; !"#$%&< '&(&$&)%&< !" # 特別に困難な$# %&' ()自分で体得すること*+, )-. / *"$+,-./&+0 !" 0 1 ' 2 #3 特別に難しい$* %&' 45 3技術のマスター化* 67)- ./ 12#)3,-./&+< 特別な45調整0 =#89 2=#:; &=#$ *=#%& '=#45 3=#自分の45技術を45習得する45こと*=#67 )-. / 6"+&78< !< 0 # =#特別な45困難な=#$ *=#%& '() 3=#自分に45よる45手技を45習得する45こと* =#67) -./ 6"+&79< !" 0 # =#特別に=#困難な=#$* =#%&' =#453 =#自分に45よる45技術の45習得*=#67 )-. / Figure 5: Translation examples. “/” denote chunk boundaries that are automatically determined by our decoders. Words colored blue and red respectively denote correct translations and wrong translations. Decoder C-BLEU C-RIBES Word-based (our implementation) 7.56 50.73 + chunked training data via J.DepP 7.40 51.18 Proposed Chunk-based (Model 1) 7.59 50.47 + Inter-chunk connection (Model 2) 7.78 51.48 + Word-to-chunk feedback (Model 3) 8.69 52.82 Table 4: Chunk-based BLEU and RIBES with the systems using the word-based encoder. (C-RIBES). The results are listed in Table 4. For the word-based decoder (the first line in Table 4), we performed bunsetsu-chunking by J.DepP on its outputs to obtain chunk boundaries. As another baseline (the second line in Table 4), we used the chunked sentences as training data instead of performing chunking after decoding. The results show that our models (Model 2 and Model 3) outperform the word-based decoders in both C-BLEU and C-RIBES. This indicates that our chunk-based decoders can produce more correct chunks in a more correct order than the word-based models. Qualitative Analysis To clarify the qualitative difference between the word-based decoder and our chunk-based decoders, we show translation examples in Figure 5. Words in blue and red respectively denote correct translations and wrong translations. The word-based decoder (our implementation) has completely dropped the translation of “by oneself.” On the other hand, Model 1 generated a slightly wrong translation “自分の技術を習得すること(to master own technique).” In addition, Model 1 has made another serious word-order error “特別な調整(special adjustment).” These results suggest that Model 1 can capture longer dependencies in a long sequence than the word-based decoder. However, Model 1 is not good at modeling global word order because it cannot access enough information about previous outputs. The weakness of modeling word order was overcome in Model 2 thanks to the inter-chunk connections. 
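The chunk-level evaluation described above amounts to collapsing each chunk into a single token before running the standard BLEU and RIBES scripts; a minimal sketch (with an arbitrary joiner character) is shown below.

```python
def chunks_to_tokens(chunked_sentence, joiner="|"):
    """Collapse each chunk into one token so that standard BLEU/RIBES
    scripts treat chunks as the basic translation unit (C-BLEU / C-RIBES).
    The joiner character is an arbitrary choice for illustration."""
    return [joiner.join(chunk) for chunk in chunked_sentence]

ref = [["だれ", "か", "が"], ["犬", "に"], ["噛ま", "れ", "た"]]
print(chunks_to_tokens(ref))
# ['だれ|か|が', '犬|に', '噛ま|れ|た'] -- feed these to multi-bleu.pl / RIBES.py
```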
However, Model 2 still suffered from the errors of function words: it still generates a wrong chunk “特別な(special)” instead of the correct one “特別に(specially)” and a wrong chunk “よる” instead of “より.” Although these errors seem trivial, such mistakes with function words bring serious changes of sentence meaning. However, all of these problems have disappeared in Model 3. This phenomenon supports the importance of the feedback states to provide the decoder with a better ability to choose more accurate words in chunks. 5 Related Work Much work has been done on using chunk (or phrase) structure to improve machine translation quality. The most notable work involved phrasebased SMT (Koehn et al., 2003), which has been the basis for a huge amount of work on SMT for more than ten years. Apart from this, Watanabe et al. (2003) proposed a chunk-based translation model that generates output sentences in a chunkby-chunk manner. The chunk structure is effective not only for SMT but also for example-based machine translation (EBMT). Kim et al. (2010) proposed a chunk-based EBMT and showed that using chunk structures can help with finding better word alignments. Our work is different from theirs in that our models are based on NMT, but not SMT or EBMT. The decoders in the above studies can model the chunk structure by storing chunk pairs in a large table. In contrast, we do that by individually training a chunk generation model and a word prediction model with two RNNs. While most of the NMT models focus on the conversion between sequential data, some works have tried to incorporate non-sequential informa1908 tion into NMT (Eriguchi et al., 2016b; Su et al., 2017). Eriguchi et al. (2016b) use a Tree-based LSTM (Tai et al., 2015) to encode input sentence into context vectors. Given a syntactic tree of a source sentence, their tree-based encoder encodes words from the leaf nodes to the root nodes recursively. Su et al. (2017) proposed a lattice-based encoder that considers multiple tokenization results while encoding the input sentence. To prevent the tokenization errors from propagating to the whole NMT system, their attice-based encoder can utilize multiple tokenization results. These works focus on the encoding process and propose better encoders that can exploit the structures of the source language. In contrast, our work focuses on the decoding process to capture the structure of the target language. The encoders described above and our proposed decoders are complementary so they can be combined into a single network. Considering that our Model 1 described in § 3.1 can be seen as a hierarchical RNN, our work is also related to previous studies that utilize multi-layer RNNs to capture hierarchical structures in data. Hierarchical RNNs are used for various NLP tasks such as machine translation (Luong and Manning, 2016), document modeling (Li et al., 2015; Lin et al., 2015), dialog generation (Serban et al., 2017), image captioning (Krause et al., 2016), and video captioning (Yu et al., 2016). In particular, Li et al. (2015) and Luong and Manning (2016) use hierarchical encoder-decoder models, but not for the purpose of learning syntactic structures of target sentences. Li et al. (2015) build hierarchical models at the sentence-word level to obtain better document representations. Luong and Manning (2016) build the word-character level to cope with the out-of-vocabulary problem. 
In contrast, we build a hierarchical models at the chunk-word level to explicitly capture the syntactic structure based on chunk segmentation. In addition, the architecture of Model 3 is also related to stacked RNN, which has shown to be effective in improving the translation quality (Luong et al., 2015a; Sutskever et al., 2014). Although these architectures look similar to each other, there is a fundamental difference between the directions of the connection between two layers. A stacked RNN consists of multiple RNN layers that are connected from the input side to the output side at every time step. In contrast, our Model 3 has a different connection at each time step. Before it generates a chunk, there is a feed-forward connection from the chunk-level decoder to the word-level decoder. However, after generating a chunk representation, the connection is to be reversed to feed back the information from the word-level decoder to the chunk-level decoder. By switching the connections between two layers, our model can capture the chunk structure explicitly. This is the first work that proposes decoders for NMT that can capture plausible linguistic structures such as chunk. Finally, we noticed that (Zhou et al., 2017) (which is accepted at the same time as this paper) have also proposed a chunk-based decoder for NMT. Their good experimental result on Chinese to English translation task also indicates the effectiveness of “chunk-by-chunk” decoders. Although their architecture is similar to our Model 2, there are several differences: (1) they adopt chunk-level attention instead of word-level attention; (2) their model predicts chunk tags (such as noun phrase), while ours only predicts chunk boundaries; and (3) they employ a boundary gate to decide the chunk boundaries, while we do that by simply having the model generate end-of-chunk tokens. 6 Conclusion In this paper, we propose chunk-based decoders for NMT. As the attention mechanism in NMT plays a similar role to the translation model in phrase-based SMT, our chunk-based decoders are intended to capture the notion of chunks in chunkbased (or phrase-based) SMT. We utilize the chunk structure to efficiently capture long-distance dependencies and cope with the problem of free word-order languages such as Japanese. We designed three models that have hierarchical RNNlike architectures, each of which consists of a word-level decoder and a chunk-level decoder. We performed experiments on the WAT ’16 Englishto-Japanese translation task and found that our best model outperforms the strongest baselines by +0.93 BLEU score and by +0.57 RIBES score. In future work, we will explore the optimal structures of chunk-based decoder for other free word-order languages such as Czech, German, and Turkish. In addition, we plan to combine our decoder with other encoders that capture language structure, such as a hierarchical RNN (Luong and Manning, 2016), a Tree-LSTM (Eriguchi et al., 2016b), or an order-free encoder, such as a CNN (Kalchbrenner and Blunsom, 2013). 1909 Acknowledgements This research was partially supported by the Research and Development on Real World Big Data Integration and Analysis program of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and RIKEN, Japan, and by the Chinese National Research Fund (NSFC) Key Project No. 61532013 and National China 973 Project No. 2015CB352401. 
The authors appreciate Dongdong Zhang, Shuangzhi Wu, and Zhirui Zhang for the fruitful discussions during the first and second authors were interns at Microsoft Research Asia. We also thank Masashi Toyoda and his group for letting us use their computing resources. Finally, we thank the anonymous reviewers for their careful reading of our paper and insightful comments. References Steven P. Abney. 1991. Parsing by chunks. In Principle-based parsing, Springer, pages 257–278. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the Third International Conference on Learning Representations (ICLR). Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST). pages 103–111. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1724– 1734. Fabien Cromieres, Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2016. Kyoto university participation to WAT 2016. In Proceedings of the Third Workshop on Asian Translation (WAT). pages 166– 174. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016a. Character-based decoding in treeto-sequence attention-based neural machine translation. In Proceedings of the Third Workshop on Asian Translation (WAT). pages 175–183. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016b. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). pages 823–833. Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing. pages 49–57. Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning (ICML). pages 1319–1327. Shinkichi Hashimoto. 1934. Kokugoho Yosetsu. Meiji Shoin. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 944–952. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1700– 1709. Jae Dong Kim, Ralf D. Brown, and Jaime G. Carbonell. 2010. Chunk-based EBMT. In Proceedings of the 14th workshop of the European Association for Machine Translation (EAMT). Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL). pages 177–180. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. 
Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL). pages 48–54. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2016. A hierarchical approach for generating descriptive image paragraphs. In arXiv:1611.06607 [cs.CV]. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). pages 1106–1115. 1910 Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 899–907. Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). pages 1054–1063. Minh-Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015a. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the Seventh International Joint Conference on Natural Language Processing (ACL-IJCNLP). pages 11–19. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1412– 1421. Masaki Murata, Kiyotaka Uchimoto, Qing Ma, and Hitoshi Isahara. 2000. Bunsetsu identification using category-exclusive rules. In Proceedings of the 18th International Conference on Computational Linguistics (COLING). pages 565–571. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. ASPEC: Asian scientific paper excerpt corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC). pages 2204–2208. Graham Neubig. 2016. Lexicons and minimum risk training for neural machine translation: NAISTCMU at WAT2016. In Proceedings of the Third Workshop on Asian Translation (WAT). pages 119– 125. Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural reranking improves subjective quality of machine translation: NAIST at WAT2015. In Proceedings of the Second Workshop on Asian Translation (WAT). pages 35–41. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT). pages 529–533. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL). pages 311–318. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. 
In Proceedings of the Second International Conference on Learning Representations (ICLR). Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation (WMT). pages 371– 376. Iulian V. Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI). Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Lattice-based recurrent neural network encoders for neural machine translation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI). Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems (NIPS). pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). pages 1556–1566. Taro Watanabe, Eiichiro Sumita, and Hiroshi G. Okuno. 2003. Chunk-based statistical translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL). pages 303–310. Naoki Yoshinaga and Masaru Kitsuregawa. 2009. Polynomial to linear: Efficient classification with conjunctive features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1542–1551. Naoki Yoshinaga and Masaru Kitsuregawa. 2010. Kernel slicing: Scalable online training with conjunctive features. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING). pages 1245–1253. Naoki Yoshinaga and Masaru Kitsuregawa. 2014. A self-adaptive classifier for efficient text-stream processing. In Proceedings of the 25th International 1911 Conference on Computational Linguistics (COLING). pages 1091–1102. Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pages 4584– 4593. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. In arXiv:1212.5701 [cs.LG]. Hao Zhou, Zhaopeng Tu, Shujian Huang, Xiaohua Liu, Hang Li, and Jiajun Chen. 2017. Chunk-based biscale decoder for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). 1912
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1913–1924 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1175 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1913–1924 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1175 Doubly-Attentive Decoder for Multi-modal Neural Machine Translation Iacer Calixto ADAPT Centre School of Computing Dublin City University Dublin, Ireland Qun Liu ADAPT Centre School of Computing Dublin City University Dublin, Ireland {iacer.calixto,qun.liu,nick.campbell}@adaptcentre.ie Nick Campbell ADAPT Centre Speech Communication Lab Trinity College Dublin Dublin 2, Ireland Abstract We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation. Our decoder learns to attend to source-language words and parts of an image independently by means of two separate attention mechanisms as it generates words in the target language. We find that our model can efficiently exploit not just back-translated in-domain multi-modal data but also large general-domain text-only MT corpora. We also report state-of-the-art results on the Multi30k data set. 1 Introduction Neural Machine Translation (NMT) has been successfully tackled as a sequence to sequence learning problem (Kalchbrenner and Blunsom, 2013; Cho et al., 2014b; Sutskever et al., 2014) where each training example consists of one source and one target variable-length sequences, with no prior information on the alignment between the two. In the context of NMT, Bahdanau et al. (2015) first proposed to use an attention mechanism in the decoder, which is trained to attend to the relevant source-language words as it generates each word of the target sentence. Similarly, Xu et al. (2015) proposed an attention-based model for the task of image description generation (IDG) where a model learns to attend to specific parts of an image representation (the source) as it generates its description (the target) in natural language. We are inspired by recent successes in applying attention-based models to NMT and IDG. In this work, we propose an end-to-end attention-based multi-modal neural machine translation (MNMT) model which effectively incorporates two independent attention mechanisms, one over sourcelanguage words and the other over different areas of an image. Our main contributions are: • We propose a novel attention-based MNMT model which incorporates spatial visual features in a separate visual attention mechanism; • We use a medium-sized, back-translated multi-modal in-domain data set and large general-domain text-only MT corpora to pretrain our models and show that our MNMT model can efficiently exploit both; • We show that images bring useful information into an NMT model, e.g. in situations in which sentences describe objects illustrated in the image. To the best of our knowledge, previous MNMT models in the literature that utilised spatial visual features did not significantly improve over a comparable model that used global visual features or even only textual features (Caglayan et al., 2016a; Calixto et al., 2016; Huang et al., 2016; Libovick´y et al., 2016; Specia et al., 2016). 
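A minimal NumPy sketch of how the annotation vectors hi are assembled from the forward and backward passes is given below; the recurrent transitions are simplified stand-ins for full GRUs and all sizes are toy values, not the 620D/1024D setting used later.

```python
import numpy as np

d_emb, d_hid = 8, 6          # toy sizes
rng = np.random.default_rng(2)

def toy_gru_factory():
    """Stand-in for a trained GRU: a fixed random affine + tanh transition."""
    W = rng.standard_normal((d_hid, d_emb)) * 0.1
    U = rng.standard_normal((d_hid, d_hid)) * 0.1
    return lambda h, x: np.tanh(W @ x + U @ h)

fwd, bwd = toy_gru_factory(), toy_gru_factory()

def encode(embeddings):
    """Build annotation vectors h_i = [fwd_h_i ; bwd_h_i] from a forward
    (left-to-right) and a backward (right-to-left) pass over the source."""
    N = len(embeddings)
    f = np.zeros(d_hid)
    fwd_states = []
    for x in embeddings:                 # left to right
        f = fwd(f, x)
        fwd_states.append(f)
    b = np.zeros(d_hid)
    bwd_states = [None] * N
    for i in range(N - 1, -1, -1):       # right to left
        b = bwd(b, embeddings[i])
        bwd_states[i] = b
    return [np.concatenate([fwd_states[i], bwd_states[i]]) for i in range(N)]

C = encode(rng.standard_normal((5, d_emb)))
print(len(C), C[0].shape)   # 5 annotation vectors, each of size 2 * d_hid
```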
In this work, we wish to address this issue and propose an MNMT model that uses, in addition to an attention mechanism over the source-language words, an additional visual attention mechanism to incorporate spatial visual features, and still improves on simpler text-only and multi-modal attention-based NMT models. The remainder of this paper is structured as follows. We first briefly revisit the attentionbased NMT framework (§2) and expand it into an MNMT framework (§3). In §4, we introduce the 1913 datasets we use to train and evaluate our models, in §5 we discuss our experimental setup and analyse and discuss our results. Finally, in §6 we discuss relevant related work and in §7 we draw conclusions and provide avenues for future work. 2 Background and Notation 2.1 Attention-based NMT In this section, we describe the attention-based NMT model introduced by Bahdanau et al. (2015). Given a source sequence X = (x1, x2, · · · , xN) and its translation Y = (y1, y2, · · · , yM), an NMT model aims to build a single neural network that translates X into Y by directly learning to model p(Y | X). The entire network consists of one encoder and one decoder with one attention mechanism, typically implemented as two Recurrent Neural Networks (RNN) and one multilayer perceptron, respectively. Each xi is a row index in a source lookup or word embedding matrix Ex ∈R|Vx|×dx, as well as each yj being an index in a target lookup or word embedding matrix Ey ∈R|Vy|×dy, Vx and Vy are source and target vocabularies, and dx and dy are source and target word embeddings dimensionalities, respectively. The encoder is a bi-directional RNN with GRU (Cho et al., 2014a), where a forward RNN −→ Φ enc reads X word by word, from left to right, and generates a sequence of forward annotation vectors (−→ h 1, −→ h 2, · · · , −→ h N) at each encoder time step i ∈[1, N]. Similarly, a backward RNN ←− Φ enc reads X from right to left, word by word, and generates a sequence of backward annotation vectors (←− h N, ←− h N−1, · · · , ←− h 1). The final annotation vector is the concatenation of forward and backward vectors hi = −→ hi; ←− hi  , and C = (h1, h2, · · · , hN) is the set of source annotation vectors. These annotation vectors are in turn used by the decoder, which is essentially a neural language model (LM) (Bengio et al., 2003) conditioned on the previously emitted words and the source sentence via an attention mechanism. A multilayer perceptron is used to initialise the decoder’s hidden state s0 at time step t = 0, where the input to this network is the concatenation of the last forward and backward vectors −→ hN; ←− h1  . At each time step t of the decoder, a timedependent source context vector ct is computed based on the annotation vectors C and the decoder previous hidden state st−1. This is part of the formulation of the conditional GRU and is described further in §2.2. In other words, the encoder is a bi-directional RNN with GRU and the decoder is an RNN with a conditional GRU. Given a hidden state st, the probabilities for the next target word are computed using one projection layer followed by a softmax layer as illustrated in eq. (1), where the matrices Lo, Ls, Lw and Lc are transformation matrices and ct is a time-dependent source context vector generated by the conditional GRU. 
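The following sketch traces the order of computation in the conditional GRU (REC1, then ATTsrc as in Eqs. (2)-(4), then REC2) for one decoder step. The recurrent transitions are stand-ins for trained GRUs and the dimensions are illustrative only.

```python
import numpy as np

d = 8                                         # toy hidden size
rng = np.random.default_rng(5)

def make_gru(in_dim, out_dim=d):
    """Stand-in for a trained GRU transition (fixed random affine + tanh)."""
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    U = rng.standard_normal((out_dim, out_dim)) * 0.1
    return lambda h, x: np.tanh(W @ x + U @ h)

rec1, rec2 = make_gru(d), make_gru(2 * d)     # REC1 and REC2
v_a = rng.standard_normal(d)
U_a = rng.standard_normal((d, d))
W_a = rng.standard_normal((d, 2 * d))         # annotation vectors are 2*d (bi-RNN)

def att_src(s_prop, C):
    """ATTsrc, Eqs. (2)-(4): softmax-normalized scores over C, weighted sum."""
    e = np.array([v_a @ np.tanh(U_a @ s_prop + W_a @ h) for h in C])
    a = np.exp(e - e.max())
    a /= a.sum()
    return (a[:, None] * C).sum(axis=0)

def conditional_gru_step(s_prev, y_prev_emb, C):
    s_prop = rec1(s_prev, y_prev_emb)         # REC1: hidden state proposal s'_t
    c_t = att_src(s_prop, C)                  # ATTsrc: source context vector c_t
    s_t = rec2(s_prop, c_t)                   # REC2: final hidden state s_t
    return s_t, c_t

C = rng.standard_normal((6, 2 * d))           # 6 source annotation vectors
s_t, c_t = conditional_gru_step(np.zeros(d), rng.standard_normal(d), C)
print(s_t.shape, c_t.shape)
```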
2.2 Conditional GRU The conditional GRU1, illustrated in Figure 1, has three main components computed at each time step t of the decoder: • REC1 computes a hidden state proposal s′ t based on the previous hidden state st−1 and the previously emitted word ˆyt−1; • ATTsrc2 is an attention mechanism over the hidden states of the source-language RNN and computes ct using all source annotation vectors C and the hidden state proposal s′ t; • REC2 computes the final hidden state st using the hidden state proposal s′ t and the timedependent source context vector ct. First, a single-layer feed-forward network is used to compute an expected alignment esrc t,i between each source annotation vector hi and the target word ˆyt to be emitted at the current time step t, as shown in Equations (2) and (3): esrc t,i = (vsrc a )T tanh(U src a s′ t + W src a hi), (2) αsrc t,i = exp (esrc t,i) PN j=1 exp (esrc t,j) , (3) where αsrc t,i is the normalised alignment matrix between each source annotation vector hi and the word ˆyt to be emitted at time step t, and vsrc a , U src a and W src a are model parameters. Finally, a time-dependent source context vector ct is computed as a weighted sum over the source annotation vectors, where each vector is weighted by the attention weight αsrc t,i, as in eq. (4): ct = N X i=1 αsrc t,ihi. (4) 1https://github.com/nyu-dl/ dl4mt-tutorial/blob/master/docs/cgru.pdf. 2ATTsrc is named ATT in the original technical report. 1914 p(yt = k | y<t, ct) ∝exp(Lo tanh(Lsst + LwEy[ˆyt−1] + Lcct)). (1) Figure 1: An illustration of the conditional GRU: the steps taken to compute the current hidden state st from the previous state st−1, the previously emitted word ˆyt−1, and the source annotation vectors C, including the candidate hidden state s′ t and the source-language attention vector ct. 3 Multi-modal NMT Our MNMT model can be seen as an expansion of the attention-based NMT framework described in §2.1 with the addition of a visual component to incorporate spatial visual features. We use publicly available pre-trained CNNs for image feature extraction. Specifically, we extract spatial image features for all images in our dataset using the 50-layer Residual network (ResNet-50) of He et al. (2015). These spatial features are the activations of the res4f layer, which can be seen as encoding an image in a 14×14 grid, where each of the entries in the grid is represented by a 1024D feature vector that only encodes information about that specific region of the image. We vectorise this 3-tensor into a 196×1024 matrix A = (a1, a2, · · · , aL), al ∈R1024 where each of the L = 196 rows consists of a 1024D feature vector and each column, i.e. feature vector, represents one grid in the image. 3.1 NMTSRC+IMG: decoder with two independent attention mechanisms Model NMTSRC+IMG integrates two separate attention mechanisms over the source-language words and visual features in a single decoder RNN. Our doubly-attentive decoder RNN is conditioned on the previous hidden state of the decoder and the previously emitted word, as well as the source sentence and the image via two independent attention mechanisms, as illustrated in Figure 2. We implement this idea expanding the conditional GRU described in §2.2 onto a doublyconditional GRU. To that end, in addition to the source-language attention, we introduce a new attention mechanism ATTimg to the original conditional GRU proposal. 
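As a side note before describing this attention, the spatial feature matrix A above could be extracted along the following lines with a standard toolkit; treating torchvision's third residual stage as the res4f activations, and the pretrained-weights flag, are assumptions that depend on the library version rather than details taken from the paper.

```python
import torch
import torchvision.models as models

# Truncate ResNet-50 after its third residual stage, whose output is a
# 1024 x 14 x 14 feature map for a 224x224 input. Mapping torchvision's
# layer3 output onto the paper's res4f activations is an assumption, and
# newer torchvision versions replace `pretrained=True` with a `weights=`
# argument.
resnet = models.resnet50(pretrained=True)
res4_extractor = torch.nn.Sequential(*list(resnet.children())[:7]).eval()

with torch.no_grad():
    img = torch.rand(1, 3, 224, 224)      # stand-in for an ImageNet-normalized image
    fmap = res4_extractor(img)            # shape (1, 1024, 14, 14)
    A = fmap.flatten(2).transpose(1, 2)   # shape (1, 196, 1024): L = 196 patches
print(A.shape)
```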
This visual attention computes a time-dependent image context vector it given a hidden state proposal s′ t and the image annotation vectors A = (a1, a2, · · · , aL) using the “soft” attention (Xu et al., 2015). This attention mechanism is very similar to the source-language attention with the addition of a gating scalar, explained further below. First, a single-layer feed-forward network is used to compute an expected alignment eimg t,l between each image annotation vector al and the target word to be emitted at the current time step t, as in eqs. (5) and (6): eimg t,l = (vimg a )T tanh(U img a s′ t + W img a al), (5) αimg t,l = exp (eimg t,l ) PL j=1 exp (eimg t,j ) , (6) where αimg t,l is the normalised alignment matrix between all the image patches al and the target word to be emitted at time step t, and vimg a , U img a and W img a are model parameters. Note that Equations (2) and (3), that compute the expected source alignment esrc t,i and the weight matrices αsrc t,i, and eqs. (5) and (6) that compute the expected image alignment eimg t,l and the weight matrices αimg t,l , both compute similar statistics over the source and image annotations, respectively. In eq. (7) we compute βt ∈[0, 1], a gating scalar used to weight the expected importance of the image context vector in relation to the next target word at time step t: βt = σ(Wβst−1 + bβ), (7) where Wβ, bβ are model parameters. It is in turn used to compute the time-dependent image context vector it for the current decoder time step t, as in eq. (8): it = βt L X l=1 αimg t,l al. (8) 1915 Figure 2: A doubly-attentive decoder learns to attend to image patches and source-language words independently when generating translations. The only difference between Equations (4) (source context vector) and (8) (image context vector) is that the latter uses a gating scalar, whereas the former does not. We use β following Xu et al. (2015) who empirically found it to improve the variability of the image descriptions generated with their model. Finally, we use the time-dependent image context vector it as an additional input to a modified version of REC2 (§2.2), which now computes the final hidden state st using the hidden state proposal s′ t, and the time-dependent source and image context vectors ct and it, as in eq. (9): zt = σ(W src z ct + W img z it + Uzs′ j), rt = σ(W src r ct + W img r it + Urs′ j), st = tanh(W srcct + W imgit + rt ⊙(Us′ t)), st = (1 −zt) ⊙st + zt ⊙s′ t. (9) In Equation (10), the probabilities for the next target word are computed using the new multimodal hidden state st, the previously emitted word ˆyt−1, and the two context vectors ct and it, where Lo, Ls, Lw, Lcs and Lci are projection matrices and trained with the model. 4 Data The Flickr30k data set contains 30k images and 5 descriptions in English for each image (Young et al., 2014). In this work, we use the Multi30k dataset (Elliott et al., 2016), which consists of two multilingual expansions of the original Flickr30k: one with translated data and another one with comparable data, henceforth referred to as M30kT and M30kC, respectively. For each of the 30k images in the Flickr30k, the M30kT has one of the English descriptions manually translated into German by a professional translator. Training, validation and test sets contain 29k, 1,014 and 1k images respectively, each accompanied by a sentence pair (the original English sentence and its translation into German). 
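The gated image attention of Eqs. (5)-(8) above can be summarized in a short sketch; the decoder-side dimensions are toy values, while the 196 patches of 1024-D features match the extracted ResNet-50 grid.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def visual_attention(s_prop, s_prev, A, v_a, U_a, W_a, W_beta, b_beta):
    """Eqs. (5)-(8): soft attention over the L image patches plus the gating
    scalar beta that scales the image context vector i_t."""
    e = np.array([v_a @ np.tanh(U_a @ s_prop + W_a @ a_l) for a_l in A])  # Eq. (5)
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                                                  # Eq. (6)
    beta = sigmoid(W_beta @ s_prev + b_beta)                              # Eq. (7)
    i_t = beta * (alpha[:, None] * A).sum(axis=0)                         # Eq. (8)
    return i_t, alpha, beta

rng = np.random.default_rng(4)
L, d_img, d_s, d_a = 196, 1024, 10, 8    # real image grid, toy decoder sizes
A = rng.standard_normal((L, d_img))      # spatial image annotation vectors
i_t, alpha, beta = visual_attention(
    rng.standard_normal(d_s), rng.standard_normal(d_s), A,
    rng.standard_normal(d_a), rng.standard_normal((d_a, d_s)),
    rng.standard_normal((d_a, d_img)),
    rng.standard_normal(d_s), 0.0)
print(i_t.shape, float(beta))
```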
For each of the 30k images in the Flickr30k, the M30kC has five descriptions in German collected independently from the English descriptions. Training, validation and test sets contain 29k, 1,014 and 1k images respectively, each accompanied by five sentences in English and five sentences in German. We use the entire M30kT training set for training our MNMT models, its validation set for model selection with BLEU (Papineni et al., 2002), and its test set for evaluation. In addition, since the amount of training data available is small, we build a back-translation model using the text-only NMT model described in §2.1 trained on the Multi30kT data set (German→English and English→German), without images. We use this model to back-translate the 145k German (English) descriptions in the Multi30kC into English (German) and include the triples (synthetic English description, German description, image) when translating into German, and the triples (synthetic German description, English description, image) when translating into English, as additional training data (Sennrich et al., 2016a). We also use the WMT 2015 text-only parallel corpora available for the English–German language pair, consisting of about 4.3M sentence pairs (Bojar et al., 2015). These include the Eu1916 p(yt = k | y<t, C, A) ∝exp(Lo tanh(Lsst + LwEy[ˆyt−1] + Lcsct + Lciit)). (10) roparl v7 (Koehn, 2005), News Commentary and Common Crawl corpora, which are concatenated and used for pre-training. We use the scripts in the Moses SMT Toolkit (Koehn et al., 2007) to normalise and tokenize English and German descriptions, and we also convert space-separated tokens into subwords (Sennrich et al., 2016b). All models use a common vocabulary of 83, 093 English and 91, 141 German subword tokens. If sentences in English or German are longer than 80 tokens, they are discarded. We train models to translate from English into German, as well as for German into English, and report evaluation of cased, tokenized sentences with punctuation. 5 Experimental setup Our encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Source and target word embeddings are 620D each and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialised by sampling from a Gaussian N(0, 0.012), recurrent matrices are random orthogonal and bias vectors are all initialised to zero. Visual features are obtained by feeding images to the pre-trained ResNet-50 and using the activations of the res4f layer (He et al., 2015). We apply dropout with a probability of 0.5 in the encoder bidirectional RNN, the image features, the decoder RNN and before emitting a target word. We follow Gal and Ghahramani (2016) and apply dropout to the encoder bidirectional and the decoder RNN using one same mask in all time steps. All models are trained using stochastic gradient descent with ADADELTA (Zeiler, 2012) with minibatches of size 80 (text-only NMT) or 40 (MNMT), where each training instance consists of one English sentence, one German sentence and one image (MNMT). We apply early stopping for model selection based on BLEU4, so that if a model does not improve on BLEU4 in the validation set for more than 20 epochs, training is halted. 
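The training setup just described can be condensed into a small configuration sketch together with the early-stopping rule; the helper name and dictionary layout are illustrative, not taken from the authors' code.

```python
# A compact summary of the training configuration described above, written as
# a plain config dict plus the early-stopping rule (patience of 20 epochs, as
# stated in the text).
config = {
    "rnn_dim": 1024, "emb_dim": 620, "dropout": 0.5,
    "optimizer": "adadelta", "batch_size": {"nmt": 80, "mnmt": 40},
    "max_sentence_len": 80, "patience_epochs": 20,
}

def should_stop(val_bleu_history, patience=config["patience_epochs"]):
    """Stop when validation BLEU4 has not improved for `patience` epochs.
    (The exact off-by-one convention is a detail of this sketch.)"""
    if len(val_bleu_history) <= patience:
        return False
    best_so_far = max(val_bleu_history[:-patience])
    return max(val_bleu_history[-patience:]) <= best_so_far

print(should_stop([30.1, 31.0] + [30.5] * 20))  # True: no gain in the last 20 epochs
```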
The translation quality of our models is evaluated quantitatively in terms of BLEU4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), TER (Snover et al., 2006), and chrF3 (Popovi´c, 2015).3 We report statistical significance with approximate randomisation for the first three metrics with MultEval (Clark et al., 2011). 5.1 Baselines We train a text-only phrase-based SMT (PBSMT) system and a text-only NMT model for comparison (English→German and German→English). Our PBSMT baseline is built with Moses and uses a 5–gram LM with modified Kneser-Ney smoothing (Kneser and Ney, 1995). It is trained on the English→German (German→English) descriptions of the M30kT, whereas its LM is trained on the German (English) descriptions only. We use minimum error rate training to tune the model with BLEU (Och, 2003). The text-only NMT baseline is the one described in §2.1 and is trained on the M30kT’s English–German descriptions, again in both language directions. When translating into German, we also compare our model against two publicly available results obtained with multi-modal attention-based NMT models. The first model is Huang et al. (2016)’s best model trained on the same data, and the second is their best model using additional object detections, respectively models m1 (image at head) and m3 in the authors’ paper. 5.2 Results In Table 1, we show results for the two textonly baselines NMT and PBSMT, the multimodal models of Huang et al. (2016), and our MNMT models trained on the M30kT and pretrained on the in-domain back-translated M30kC and the general-domain text-only English-German MT corpora from WMT 2015. All models are trained to translate from English into German. Training on M30kT One main finding is that our model consistently outperforms the comparable model of Huang et al. (2016) when translating into German, with improvements of +1.4 BLEU and +2.7 METEOR. In fact, even when their model has access to more data our model still improves by +0.9 METEOR. Moreover, we can also conclude from Table 1 that PBSMT performs better at recall-oriented 3We specifically compute character 6-gram F3, and additionally character precision and recall for comparison. 1917 English→German Model Training BLEU4↑ METEOR↑ TER↓ chrF3↑(prec. / recall) data NMT M30kT 33.7 52.3 46.7 65.2 (67.7 / 65.0) PBSMT M30kT 32.9 54.3† 45.1† 67.4 (66.5 / 67.5) Huang et al. (2016) M30kT 35.1 (↑1.4) 52.2 (↓2.1) — — — + RCNN 36.5 (↑2.8) 54.1 (↓0.2) — — — NMTSRC+IMG M30kT 36.5†‡ 55.0† 43.7†‡ 67.3 (66.8 / 67.4) Improvements NMTSRC+IMG vs. NMT ↑2.8 ↑2.7 ↓3.0 ↑2.1 ↓0.9 / ↑2.4 NMTSRC+IMG vs. PBSMT ↑3.6 ↑0.7 ↓1.4 ↓0.1 ↑0.3 / ↓0.1 NMTSRC+IMG vs. Huang ↑1.4 ↑2.8 — — — NMTSRC+IMG vs. Huang (+RCNN) ↑0.0 ↑0.9 — — — Pre-training data set: back-translated M30kC (in-domain) PBSMT (LM) M30kT 34.0 ↑0.0 55.0† ↑0.0 44.7 ↑0.0 68.0 (66.8 / 68.1) NMT M30kT 35.5‡ ↑0.0 53.4 ↑0.0 43.3‡ ↑0.0 65.2 (67.7 / 65.0) NMTSRC+IMG M30kT 37.1†‡ 54.5†‡ 42.8†‡ 66.6 (67.2 / 66.5) NMTSRC+IMG vs. best PBSMT ↑3.1 ↓0.5 ↓1.9 ↓1.4 ↑0.4 / ↓1.6 NMTSRC+IMG vs. NMT ↑1.6 ↑1.1 ↓0.5 ↑1.4 ↓0.5 / ↑1.5 Pre-training data set: WMT’15 English-German corpora (general domain) PBSMT (concat) M30kT 32.6 53.9 46.1 67.3 (66.3 / 67.4) PBSMT (LM) M30kT 32.5 54.1 46.0 67.3 (66.0 / 67.4) NMT M30kT 37.8† ↑0.0 56.7† ↑0.0 41.0† ↑0.0 69.2 (69.7 / 69.1) NMTSRC+IMG M30kT 39.0†‡ 56.8†‡ 40.6†‡ 69.6 (69.6 / 69.6) NMTSRC+IMG vs. best PBSMT ↑6.4 ↑2.7 ↓5.4 ↑2.3 ↑3.3 / ↑2.2 NMTSRC+IMG vs. 
NMT ↑1.2 ↑0.1 ↓0.4 ↑0.4 ↓0.1 / ↑0.5 Table 1: BLEU4, METEOR, chrF3, character-level precision and recall (higher is better) and TER scores (lower is better) on the translated Multi30k (M30kT) test set. Best text-only baselines results are underlined and best overall results appear in bold. We show Huang et al. (2016)’s improvements over the best text-only baseline in parentheses. Results are significantly better than the NMT baseline (†) and the SMT baseline (‡) with p < 0.01 (no pre-training) or p < 0.05 (when pre-training either on the back-translated M30kC or WMT’15 corpora). metrics, i.e. METEOR and chrF3, whereas NMT is better at precision-oriented ones, i.e. BLEU4. This is somehow expected, since the attention mechanism in NMT (Bahdanau et al., 2015) does not explicitly take attention weights from previous time steps into account, an thus lacks the notion of source coverage as in SMT (Koehn et al., 2003; Tu et al., 2016). We note that these ideas are complementary and incorporating coverage into model NMTSRC+IMG could lead to more improvements, especially in recall-oriented metrics. Nonetheless, our doubly-attentive model shows consistent gains in both precision- and recall-oriented metrics in comparison to the text-only NMT baseline, i.e. it is significantly better according to BLEU4, METEOR and TER (p < 0.01), and it also improves chrF3 by +2.1. In comparison to the PBSMT baseline, our proposed model still significantly improves according to both BLEU4 and TER (p < 0.01), also increasing METEOR by +0.7 but with an associated p-value of p = 0.071, therefore not significant for p < 0.05. Although chrF3 is the only metric in which the PBSMT model scores best, the difference between our model and the latter is only 0.1, meaning that they are practically equivalent. We note that model NMTSRC+IMG consistently increases character recall in comparison to the text-only NMT baseline. Although it can happen at the expense of character precision, gains in recall are always much higher than any eventual loss in precision, leading to consistent improvements in chrF3. In Table 2, we observe that when translating into English and training on the original M30kT, model NMTSRC+IMG outperforms both baselines by a large margin, according to all four metrics evaluated. We also note that both model NMTSRC+IMG’s character-level precision and re1918 German→English Model BLEU4↑ METEOR↑ TER↓ chrF3↑ PBSMT 32.8 34.8 43.9 61.8 NMT 38.2 35.8 40.2 62.8 NMTSRC+IMG 40.6†‡ 37.5†‡ 37.7†‡ 65.2 Improvements Ours vs. NMT ↑2.4 ↑1.7 ↓2.5 ↑2.4 Ours vs. PBSMT ↑7.8 ↑2.7 ↓6.2 ↑3.4 Pre-training data set: back-translated M30kC (in-domain) PBSMT 36.8 36.4 40.8 64.5 NMT 42.6 38.9 36.1 67.6 NMTSRC+IMG 43.2‡† 39.0‡† 35.5‡† 67.7 Improvements Ours vs. PBSMT ↑6.4 ↑2.6 ↓5.3 ↑3.2 Ours vs. NMT ↑0.6 ↑0.1 ↓0.6 ↑0.1 Table 2: BLEU4, METEOR, chrF3 (higher is better), and TER scores (lower is better) on the translated Multi30k (M30kT) test set. Best text-only baselines results are underlined and best overall results appear in bold. Results are significantly better than the NMT baseline (†) and the SMT baseline (‡) with p < 0.01. call are higher than those of the two baselines, in contrast to results obtained when translating from English into German. This suggests that model NMTSRC+IMG might better integrate the image features when translating into an “easier” language, i.e. a language with less morphology, although experiments involving more language pairs are necessary to confirm whether this is indeed the case. 
Pre-training We now discuss results for models pre-trained using different data sets. We first pre-trained the two text-only baselines PBSMT and NMT, and our MNMT model on the backtranslated M30kC, a medium-sized in-domain image description data set (145k training instances), in both directions. We also pre-trained the same models on the English–German parallel sentences of much larger MT data sets, i.e. the concatenation of the Europarl (Koehn, 2005), Common Crawl and News Commentary corpora, used in WMT 2015 (∼4.3M parallel sentences). Model PBSMT (concat.) used the concatenation of the pretraining and training data for training, and model PBSMT (LM) used the general-domain German sentences as additional data to train the LM. From Tables 1 and 2, it is clear that model NMTSRC+IMG can learn from both in-domain, multi-modal pretraining data sets as well as text-only, general domain ones. Pre-training on M30kC When pre-training on the back-translated M30kC and translating into German, the recall-oriented chrF3 shows a difference of 1.4 points between PBSMT and our model, mostly due to character recall; nonetheless, our model still improved by the same margin on the text-only NMT baseline. Our model still outperforms the PBSMT baseline according to BLEU4 and TER, and the text-only NMT baseline according to all metrics (p < .05). When translating into English, model NMTSRC+IMG still consistently scores higher according to all metrics evaluated, although the differences between its translations and those obtained with the NMT baseline are no longer statistically significant (p < 0.01). Pre-training on WMT 2015 corpora We also pre-trained our English–German models on the WMT 2015 corpora, which took 10 days, i.e. ∼6–7 epochs. Results show that model NMTSRC+IMG improves significantly over the NMT baseline according to BLEU4, and is consistently better than the PBSMT baseline according to all four metrics.4 This is a strong indication that model NMTSRC+IMG can exploit the additional pre-training data efficiently, both generaland in-domain. While the PBSMT model is still competitive when using additional in-domain data—according to METEOR and chrF3— the same cannot be said when using general-domain pre-training corpora. From our experiments, NMT models in general, and especially model NMTSRC+IMG, thrive when training and test domains are mixed, which is a very common realworld scenario. Textual and visual attention In Figure 3, we visualise the visual and textual attention weights for an entry of the M30kT test set. In the visual attention, the β gate (written in parentheses after each word) caused the image features to be used mostly to generate the words Mann (man) and Hut (hat), two highly visual terms in the sentence. We observe that in general visually grounded terms, e.g. Mann and Hut, usually have a high associated β value, whereas other less visual terms like mit (with) or auf (at) do not. That causes the model to use the image features when it is describing a visual concept in the sentence, which is an interest4In order for PBSMT models to remain competitive, we believe more advanced data selection techniques are needed, which are out of the scope of this work. 1919 (a) Image–target word alignments. (b) Source–target word alignments. Figure 3: Visualisation of image– and source–target word alignments for the M30kT test set. ing feature of our model. 
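As a rough illustration of the gating behaviour just described, the sketch below shows a scalar gate β, computed from the previous decoder state, scaling the image context vector before it is fused with the source-language context. This is a schematic reconstruction rather than the model's exact formulation: the projection matrices (W_src, W_img, w_gate), the toy dimensions, and the choice of conditioning the gate only on the previous hidden state are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_multimodal_context(s_prev, ctx_src, ctx_img, w_gate, b_gate, W_src, W_img):
    """Schematic fusion of textual and visual attention contexts in a doubly-attentive decoder.

    s_prev  : previous decoder hidden state, shape (d_dec,)
    ctx_src : source-language attention context, shape (d_src,)
    ctx_img : image attention context, shape (d_img,)
    The scalar gate beta in (0, 1) decides how much of the image context is let through.
    """
    beta = sigmoid(w_gate @ s_prev + b_gate)
    fused = W_src @ ctx_src + beta * (W_img @ ctx_img)
    return fused, beta

# Toy dimensions and randomly initialised parameters, for illustration only.
d_dec, d_src, d_img, d_out = 4, 6, 5, 4
rng = np.random.default_rng(0)
fused, beta = gated_multimodal_context(
    rng.normal(size=d_dec), rng.normal(size=d_src), rng.normal(size=d_img),
    rng.normal(size=d_dec), 0.0,
    rng.normal(size=(d_out, d_src)), rng.normal(size=(d_out, d_img)),
)
```

Target words with a high β (e.g. Mann or Hut in Figure 3) then draw heavily on the visual context, while function words with a low β rely almost entirely on the textual context.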
Interestingly, our model is very selective when choosing to use image features: it only assigned β > 0.5 for 20% of the outputted target words, and β > 0.8 to only 8%. A manual inspection of translations shows that these words are mostly concrete nouns with a strong visual appeal. Lastly, using two independent attention mechanisms is a good compromise between model compactness and flexibility. While the attentionbased NMT model baseline has ∼200M parameters, model NMTSRC+IMG has ∼213M, thus using just ∼6.6% more parameters than the latter. 6 Related work Multi-modal MT was just recently addressed by the MT community by means of a shared task (Specia et al., 2016). However, there has been a considerable amount of work on natural language generation from non-textual inputs. Mao et al. (2014) introduced a multi-modal RNN that integrates text and visual features and applied it to the tasks of image description generation and image–sentence ranking. In their work, the authors incorporate global image features in a separate multi-modal layer that merges the RNN textual representations and the global image features. Vinyals et al. (2015) proposed an influential neural IDG model based on the sequenceto-sequence framework, which is trained end-toend. Elliott et al. (2015) put forward a model to generate multilingual descriptions of images by learning and transferring features between two independent, non-attentive neural image description models.5 Venugopalan et al. (2015) introduced a model trained end-to-end to generate textual descriptions of open-domain videos from the video frames based on the sequence-to-sequence framework. Finally, Xu et al. (2015) introduced the first attention-based IDG model where an attentive decoder learns to attend to different parts of an image as it generates its description in natural language. In the context of NMT, Zoph and Knight (2016) introduced a multi-source attention-based NMT model trained to translate a pair of sentences in two different source languages into a target language, and reported considerable improvements over a single-source baseline. Dong et al. (2015) proposed a multi-task learning approach where a model is trained to translate from one source language into multiple target languages. Firat et al. (2016) put forward a multi-way model trained to translate between many different source and target languages. Instead of one attention mechanism per language pair as in Dong et al. (2015), which would lead to a quadratic number of attention mechanisms in relation to language pairs, they use a shared attention mechanism where each target language has one attention shared by all source languages. Luong et al. (2016) proposed a multitask approach where they train a model using two tasks and a shared decoder: the main task is to translate from German into English and the sec5Although their model has not been devised with translation as its primary goal, theirs is one of the baselines of the first shared task in multi-modal MT in WMT 2016 (Specia et al., 2016). 1920 ondary task is to generate English image descriptions. They show improvements in the main translation task when also training for the secondary image description task. Although not an NMT model, Hitschler et al. (2016) recently used image features to re-rank translations of image descriptions generated by an SMT model and reported significant improvements. 
Although no purely neural multi-modal model to date significantly improves on both text-only NMT and SMT models (Specia et al., 2016), different research groups have proposed to include global and spatial visual features in re-ranking n-best lists generated by an SMT system or directly in an NMT framework with some success (Caglayan et al., 2016a; Calixto et al., 2016; Huang et al., 2016; Libovick´y et al., 2016; Shah et al., 2016). To the best of our knowledge, the best published results of a purely MNMT model are those of Huang et al. (2016), who proposed to use global visual features extracted with the VGG19 network (Simonyan and Zisserman, 2015) for an entire image, and also for regions of the image obtained using the RCNN of Girshick et al. (2014). Their best model improves over a strong text-only NMT baseline and is comparable to results obtained with an SMT model trained on the same data. For that reason, their models are used as baselines in our experiments whenever possible. Our work differs from previous work in that, first, we propose attention-based MNMT models. This is an important difference since the use of attention in NMT has become standard and is the current state-of-the-art (Jean et al., 2015; Luong et al., 2015; Firat et al., 2016; Sennrich et al., 2016b). Second, we propose a doublyattentive model where we effectively fuse two mono-modal attention mechanisms into one multimodal decoder, training the entire model jointly and end-to-end. Additionally, we are interested in how to merge textual and visual representations into multi-modal representations when generating words in the target language, which differs substantially from text-only translation tasks even when these translate from many source languages and/or into many target languages (Dong et al., 2015; Firat et al., 2016; Zoph and Knight, 2016). To the best of our knowledge, we are among the first6 to integrate multi-modal inputs in NMT via 6As pointed out by an anonymous reviewer, Caglayan et al. (2016b) have also experimented with attention-based independent attention mechanisms. Applications Initial experiments with model NMTSRC+IMG have been reported in Calixto et al. (2016). Additionally, NMTSRC+IMG has been applied to the machine translation of user-generated product listings from an e-commerce website, while also making use of the product images to improve translations (Calixto et al., 2017b,a). 7 Conclusions and Future Work We have introduced a novel attention-based, multi-modal NMT model to incorporate spatial visual information into NMT. We have reported state-of-the-art results on the M30kT test set, improving on previous multi-modal attention-based models. We have also showed that our model can be efficiently pre-trained on both mediumsized back-translated in-domain multi-modal data as well as also large general-domain text-only MT corpora, finding that it is able to exploit the additional data regardless of the domain. Our model also compares favourably to both NMT and PBSMT baselines evaluated on the same training data. In the future, we will incorporate coverage into our model and study how to apply it to other Natural Language Processing tasks. 
Acknowledgements This project has received funding from Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund and the European Union Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). The authors would like to thank Chris Hokamp, Peyman Passban, and Dasha Bogdanova for insightful discussions at early stages of this work, Andy Way for proofreading and providing many good suggestions of improvements, as well as our anonymous reviewers for their valuable comments and feedback. Reproducibility Code and pre-trained models for this paper are available at https://github. com/iacercalixto/nmt_doubly_ attentive. multi-modal NMT. 1921 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, ICLR 2015. San Diego, California. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. J. Mach. Learn. Res. 3:1137–1155. http://dl.acm.org/citation.cfm?id=944919.944966. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation. Lisbon, Portugal, pages 1–46. http://aclweb.org/anthology/W15-3001. Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garc´ıa-Mart´ınez, Fethi Bougares, Lo¨ıc Barrault, and Joost van de Weijer. 2016a. Does multimodality help human and machine for translation and image captioning? In Proceedings of the First Conference on Machine Translation. Berlin, Germany, pages 627–633. http://www.aclweb.org/anthology/W/W16/W162358. Ozan Caglayan, Lo¨ıc Barrault, and Fethi Bougares. 2016b. Multimodal Attention for Neural Machine Translation. CoRR abs/1609.03976. http://arxiv.org/abs/1609.03976. Iacer Calixto, Desmond Elliott, and Stella Frank. 2016. DCU-UvA Multimodal MT System Report. In Proceedings of the First Conference on Machine Translation. Berlin, Germany, pages 634–638. http://www.aclweb.org/anthology/W/W16/W162359. Iacer Calixto, Daniel Stein, Evgeny Matusov, Sheila Castilho, and Andy Way. 2017a. Human Evaluation of Multi-modal Neural Machine Translation: A Case-Study on E-Commerce Listing Titles. In Proceedings of the Sixth Workshop on Vision and Language. Valencia, Spain, pages 31–37. http://www.aclweb.org/anthology/W17-2004. Iacer Calixto, Daniel Stein, Evgeny Matusov, Pintu Lohar, Sheila Castilho, and Andy Way. 2017b. Using Images to Improve Machine-Translating ECommerce Product Listings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Valencia, Spain, pages 637–643. http://www.aclweb.org/anthology/E17-2101. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. Syntax, Semantics and Structure in Statistical Translation. page 103. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. 
Learning phrase representations using rnn encoder– decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D14-1179. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2. Portland, Oregon, HLT ’11, pages 176–181. http://dl.acm.org/citation.cfm?id=2002736.2002774. Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evaluation for Any Target Language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-Task Learning for Multiple Language Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1723– 1732. http://www.aclweb.org/anthology/P15-1166. Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-Language Image Description with Neural Sequence Models. CoRR abs/1510.04709. http://arxiv.org/abs/1510.04709. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30K: Multilingual English-German Image Descriptions. In Proceedings of the 5th Workshop on Vision and Language, VL@ACL 2016. Berlin, Germany. http://aclweb.org/anthology/W/W16/W163210.pdf. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 866–875. http://www.aclweb.org/anthology/N16-1101. Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In Advances in Neural Information Processing Systems, NIPS, Barcelona, Spain, 1922 pages 1019–1027. http://papers.nips.cc/paper/6241a-theoretically-grounded-application-of-dropout-inrecurrent-neural-networks.pdf. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC, USA, CVPR ’14, pages 580–587. https://doi.org/10.1109/CVPR.2014.81. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 . Julian Hitschler, Shigehiko Schamoni, and Stefan Riezler. 2016. Multimodal Pivots for Image Caption Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 2399–2409. http://www.aclweb.org/anthology/P16-1227. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based Multimodal Neural Machine Translation. In Proceedings of the First Conference on Machine Translation. Berlin, Germany, pages 639–645. http://www.aclweb.org/anthology/W/W16/W162360. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. 
On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1–10. http://www.aclweb.org/anthology/P15-1001. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013. Seattle, US., pages 1700–1709. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Detroit, Michigan, volume I, pages 181–184. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit. AAMT, AAMT, Phuket, Thailand, pages 79–86. http://mt-archive.info/MTS-2005-Koehn.pdf. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Prague, Czech Republic, ACL ’07, pages 177–180. http://dl.acm.org/citation.cfm?id=1557769.1557821. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Edmonton, Canada, NAACL ’03, pages 48– 54. https://doi.org/10.3115/1073445.1073462. Jindˇrich Libovick´y, Jindˇrich Helcl, Marek Tlust´y, Ondˇrej Bojar, and Pavel Pecina. 2016. CUNI System for WMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the First Conference on Machine Translation. Berlin, Germany, pages 646–654. http://www.aclweb.org/anthology/W/W16/W162361. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-Task Sequence to Sequence Learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. San Juan, Puerto Rico. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal, pages 1412–1421. Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. 2014. Explain Images with Multimodal Recurrent Neural Networks. http://arxiv.org/abs/1410.1090. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1. Sapporo, Japan, ACL ’03, pages 160–167. https://doi.org/10.3115/1075096.1075117. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Philadelphia, Pennsylvania, ACL ’02, pages 311–318. https://doi.org/10.3115/1073083.1073135. Maja Popovi´c. 2015. chrf: character n-gram fscore for automatic mt evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation. 
Lisbon, Portugal, pages 392–395. http://aclweb.org/anthology/W15-3049. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation 1923 Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 86–96. http://www.aclweb.org/anthology/P16-1009. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 1715–1725. http://www.aclweb.org/anthology/P16-1162. Kashif Shah, Josiah Wang, and Lucia Specia. 2016. SHEF-Multimodal: Grounding Machine Translation on Images. In Proceedings of the First Conference on Machine Translation. Berlin, Germany, pages 660–665. http://www.aclweb.org/anthology/W/W16/W162363. K. Simonyan and A. Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR). San Diego, CA. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas. Cambridge, MA, pages 223–231. Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A Shared Task on Multimodal Machine Translation and Crosslingual Image Description. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016. Berlin, Germany, pages 543– 553. http://aclweb.org/anthology/W/W16/W162346.pdf. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems. Montr´eal, Canada, pages 3104–3112. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 76–85. http://www.aclweb.org/anthology/P16-1008. Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond J. Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence video to text. In 2015 IEEE International Conference on Computer Vision, ICCV 2015. Santiago, Chile, pages 4534–4542. https://doi.org/10.1109/ICCV.2015.515. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015. Boston, Massachusetts, pages 3156–3164. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). JMLR Workshop and Conference Proceedings, Lille, France, pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.pdf. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2:67–78. Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701. 
Barret Zoph and Kevin Knight. 2016. Multi-Source Neural Translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 30–34. http://www.aclweb.org/anthology/N161004.

Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1925–1935 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1176 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1925–1935 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1176 A Teacher-Student Framework for Zero-Resource Neural Machine Translation Yun Chen†, Yang Liu‡∗, Yong Cheng+, Victor O.K. Li† †Department of Electrical and Electronic Engineering, The University of Hong Kong ‡State Key Laboratory of Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China +Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China [email protected]; [email protected]; [email protected]; [email protected] Abstract While end-to-end neural machine translation (NMT) has made remarkable progress recently, it still suffers from the data scarcity problem for low-resource language pairs and domains. In this paper, we propose a method for zero-resource NMT by assuming that parallel sentences have close probabilities of generating a sentence in a third language. Based on the assumption, our method is able to train a source-to-target NMT model (“student”) without parallel corpora available guided by an existing pivot-to-target NMT model (“teacher”) on a source-pivot parallel corpus. Experimental results show that the proposed method significantly improves over a baseline pivot-based model by +3.0 BLEU points across various language pairs. 1 Introduction Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015), which directly models the translation process in an end-to-end way, has attracted intensive attention from the community. Although NMT has achieved state-of-the-art translation performance on resource-rich language pairs such as English-French and German-English (Luong et al., 2015; Jean et al., 2015; Wu et al., 2016; Johnson et al., 2016), it still suffers from the unavailability of large-scale parallel corpora for translating low-resource languages. Due to the large parameter space, neural models usually learn poorly from low-count events, resulting in a poor choice for low-resource language pairs. Zoph et ∗Corresponding author: Yang Liu. al. (2016) indicate that NMT obtains much worse translation quality than a statistical machine translation (SMT) system on low-resource languages. As a result, a number of authors have endeavored to explore methods for translating language pairs without parallel corpora available. These methods can be roughly divided into two broad categories: multilingual and pivot-based. Firat et al. (2016b) present a multi-way, multilingual model with shared attention to achieve zeroresource translation. They fine-tune the attention part using pseudo bilingual sentences for the zeroresource language pair. Another direction is to develop a universal NMT model in multilingual scenarios (Johnson et al., 2016; Ha et al., 2016). They use parallel corpora of multiple languages to train one single model, which is then able to translate a language pair without parallel corpora available. 
Although these approaches prove to be effective, the combination of multiple languages in modeling and training leads to increased complexity compared with standard NMT. Another direction is to achieve source-to-target NMT without parallel data via a pivot, which is either text (Cheng et al., 2016a) or image (Nakayama and Nishida, 2016). Cheng et al. (2016a) propose a pivot-based method for zeroresource NMT: it first translates the source language to a pivot language, which is then translated to the target language. Nakayama and Nishida (2016) show that using multimedia information as pivot also benefits zero-resource translation. However, pivot-based approaches usually need to divide the decoding process into two steps, which is not only more computationally expensive, but also potentially suffers from the error propagation problem (Zhu et al., 2013). In this paper, we propose a new method for zero-resource neural machine translation. Our 1925 (a) (b) X Y Z Z X Y P(z|x; ✓x!z) P(y|z; ✓z!y) P(y|z; ✓z!y) P(y|x; ✓x!y) Figure 1: (a) The pivot-based approach and (b) the teacher-student approach to zero-resource neural machine translation. X, Y, and Z denote source, target, and pivot languages, respectively. We use a dashed line to denote that there is a parallel corpus available for the connected language pair. Solid lines with arrows represent translation directions. The pivot-based approach leverages a pivot to achieve indirect source-to-target translation: it first translates x into z, which is then translated into y. Our training algorithm is based on the translation equivalence assumption: if x is a translation of z, then P(y|x; θx→y) should be close to P(y|z; θz→y). Our approach directly trains the intended source-totarget model P(y|x; θx→y) (“student”) on a source-pivot parallel corpus, with the guidance of an existing pivot-to-target model P(y|z; ˆθz→y) (“teacher”). method assumes that parallel sentences should have close probabilities of generating a sentence in a third language. To train a source-to-target NMT model without parallel corpora (“student”), we leverage an existing pivot-to-target NMT model (“teacher”) to guide the learning process of the student model on a source-pivot parallel corpus. Compared with pivot-based approaches (Cheng et al., 2016a), our method allows direct parameter estimation of the intended NMT model, without the need to divide decoding into two steps. This strategy not only improves efficiency but also avoids error propagation in decoding. Experiments on the Europarl and WMT datasets show that our approach achieves significant improvements in terms of both translation quality and decoding efficiency over a baseline pivot-based approach to zero-resource NMT on Spanish-French and German-French translation tasks. 2 Background Neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015) advocates the use of neural networks to model the translation process in an end-to-end manner. As a data-driven approach, NMT treats parallel corpora as the major source for acquiring translation knowledge. Let x be a source-language sentence and y be a target-language sentence. We use P(y|x; θx→y) to denote a source-to-target neural translation model, where θx→y is a set of model parameters. Given a source-target parallel corpus Dx,y, which is a set of parallel source-target sentences, the model parameters can be learned by maximizing the log-likelihood of the parallel corpus: ˆθx→y = argmax θx→y ( X ⟨x,y⟩∈Dx,y log P(y|x; θx→y) ) . 
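Spelled out in display form, this maximum-likelihood objective is:

```latex
\hat{\theta}_{x \rightarrow y} \;=\; \operatorname*{argmax}_{\theta_{x \rightarrow y}}
\left\{ \sum_{\langle \mathbf{x}, \mathbf{y} \rangle \in D_{x,y}}
\log P(\mathbf{y} \mid \mathbf{x};\, \theta_{x \rightarrow y}) \right\}
```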
Given learned model parameters ˆθx→y, the decision rule for finding the translation with the highest probability for a source sentence x is given by ˆy = argmax y ( P(y|x; ˆθx→y) ) . (1) As a data-driven approach, NMT heavily relies on the availability of large-scale parallel corpora to deliver state-of-the-art translation performance (Wu et al., 2016; Johnson et al., 2016). Zoph et al. (2016) report that NMT obtains much lower BLEU scores than SMT if only small-scale parallel corpora are available. Therefore, the heavy dependence on the quantity of training data poses a severe challenge for NMT to translate zeroresource language pairs. Simple and easy-to-implement, pivot-based methods have been widely used in SMT for 1926 translating zero-resource language pairs (de Gispert and Mari˜no, 2006; Cohn and Lapata, 2007; Utiyama and Isahara, 2007; Wu and Wang, 2007; Bertoldi et al., 2008; Wu and Wang, 2009; Zahabi et al., 2013; Kholy et al., 2013). As pivotbased methods are agnostic to model structures, they have been adapted to NMT recently (Cheng et al., 2016a; Johnson et al., 2016). Figure 1(a) illustrates the basic idea of pivotbased approaches to zero-resource NMT (Cheng et al., 2016a). Let X, Y, and Z denote source, target, and pivot languages. We use dashed lines to denote language pairs with parallel corpora available and solid lines with arrows to denote translation directions. Intuitively, the source-to-target translation can be indirectly modeled by bridging two NMT models via a pivot: P(y|x; θx→z, θz→y) = X z P(z|x; θx→z)P(y|z; θz→y). (2) As shown in Figure 1(a), pivot-based approaches assume that the source-pivot parallel corpus Dx,z and the pivot-target parallel corpus Dz,y are available. As it is impractical to enumerate all possible pivot sentences, the two NMT models are trained separately in practice: ˆθx→z = argmax θx→z ( X ⟨x,z⟩∈Dx,z log P(z|x; θx→z) ) , ˆθz→y = argmax θz→y ( X ⟨z,y⟩∈Dz,y log P(y|z; θz→y) ) . Due to the exponential search space of pivot sentences, the decoding process of translating an unseen source sentence x has to be divided into two steps: ˆz = argmax z n P(z|x; ˆθx→z) o , (3) ˆy = argmax y n P(y|ˆz; ˆθz→y) o . (4) The above two-step decoding process potentially suffers from the error propagation problem (Zhu et al., 2013): the translation errors made in the first step (i.e., source-to-pivot translation) will affect the second step (i.e., pivot-to-target translation). Therefore, it is necessary to explore methods to directly model source-to-target translation without parallel corpora available. 3 Approach 3.1 Assumptions In this work, we propose to directly model the intended source-to-target neural translation based on a teacher-student framework. The basic idea is to use a pre-trained pivot-to-target model (“teacher”) to guide the learning process of a source-to-target model (“student”) without training data available on a source-pivot parallel corpus. One advantage of our approach is that Equation (1) can be used as the decision rule for decoding, which avoids the error propagation problem faced by two-step decoding in pivot-based approaches. As shown in Figure 1(b), we still assume that a source-pivot parallel corpus Dx,z and a pivot-target parallel corpus Dz,y are available. Unlike pivot-based approaches, we first use the pivot-target parallel corpus Dz,y to obtain a teacher model P(y|z; ˆθz→y), where ˆθz→y is a set of learned model parameters. 
Then, the teacher model “teaches” the student model P(y|x; θx→y) on the source-pivot parallel corpus Dx,z based on the following assumptions. Assumption 1 If a source sentence x is a translation of a pivot sentence z, then the probability of generating a target sentence y from x should be close to that from its counterpart z. We can further introduce a word-level assumption: Assumption 2 If a source sentence x is a translation of a pivot sentence z, then the probability of generating a target word y from x should be close to that from its counterpart z, given the already obtained partial translation y<j. The two assumptions are empirically verified in our experiments (see Table 2). In the following subsections, we will introduce two approaches to zero-resource neural machine translation based on the two assumptions. 3.2 Sentence-Level Teaching Given a source-pivot parallel corpus Dx,z, our training objective based on Assumption 1 is defined as follows: JSENT(θx→y) = X ⟨x,z⟩∈Dx,z KL  P(y|z; ˆθz→y) P(y|x; θx→y)  , (5) 1927 where the KL divergence sums over all possible target sentences: KL  P(y|z; ˆθz→y) P(y|x; θx→y)  = X y P(y|z; ˆθz→y) log P(y|z; ˆθz→y) P(y|x; θx→y).(6) As the teacher model parameters are fixed, the training objective can be equivalently written as JSENT(θx→y) = − X ⟨x,z⟩∈Dx,z Ey|z;ˆθz→y h log P(y|x; θx→y) i . (7) In training, our goal is to find a set of source-totarget model parameters that minimizes the training objective: ˆθx→y = argmin θx→y n JSENT(θx→y) o . (8) With learned source-to-target model parameters ˆθx→y, we use the standard decision rule as shown in Equation (1) to find the translation ˆy for a source sentence x. However, a major difficulty faced by our approach is the intractability in calculating the gradients because of the exponential search space of target sentences. To address this problem, it is possible to construct a sub-space by either sampling (Shen et al., 2016), generating a k-best list (Cheng et al., 2016b) or mode approximation (Kim and Rush, 2016). Then, standard stochastic gradient descent algorithms can be used to optimize model parameters. 3.3 Word-Level Teaching Instead of minimizing the KL divergence between the teacher and student models at the sentence level, we further define a training objective at the word level based on Assumption 2: JWORD(θx→y) = X ⟨x,z⟩∈Dx,z Ey|z;ˆθz→y h J(x, y, z, ˆθz→y, θx→y) i , (9) where J(x, y, z, ˆθz→y, θx→y) = |y| X j=1 KL  P(y|z, y<j; ˆθz→y) P(y|x, y<j; θx→y)  . (10) Equation (9) suggests that the teacher model P(y|z, y<j; ˆθz→y) “teaches” the student model P(y|x, y<j; θx→y) in a word-by-word way. Note that the KL-divergence between two models is defined at the word level: KL  P(y|z, y<j; ˆθz→y) P(y|x, y<j; θx→y)  = X y∈Vy P(y|z, y<j; ˆθz→y) log P(y|z, y<j; ˆθz→y) P(y|x, y<j; θx→y), where Vy is the target vocabulary. As the parameters of the teacher model are fixed, the training objective can be equivalently written as: JWORD(θx→y) = − X ⟨x,z⟩∈Dx,z Ey|z; ˆθz→y h S(x, y, z, ˆθz→y, θx→y) i , (11) where S(x, y, z, ˆθz→y, θx→y) = |y| X j=1 X y∈Vy P(y|z, y<j; ˆθz→y) × log P(y|x, y<j; θx→y). (12) Therefore, our goal is to find a set of source-totarget model parameters that minimizes the training objective: ˆθx→y = argmin θx→y n JWORD(θx→y) o . (13) We use similar approaches as described in Section 3.2 for approximating the full search space with sentence-level teaching. 
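To make the word-level objective of Equations (11)–(12) concrete, the contribution of one source–pivot pair and one approximated target sequence can be sketched as follows. This is an illustrative NumPy sketch, not the Theano/dl4mt implementation used in the experiments; teacher_probs would be obtained by force-decoding the approximated target with the fixed teacher model, and all names are ours.

```python
import numpy as np

def word_level_teaching_loss(teacher_probs, student_log_probs):
    """Word-level teaching loss for one approximated target sequence (inner term of Equation (11)).

    teacher_probs     : (T, V) array with P(y | z, y_<j; theta_hat) for every target
                        position j and vocabulary entry y (each row sums to 1).
    student_log_probs : (T, V) array with log P(y | x, y_<j; theta) from the student.

    Minimising -sum(teacher * student_log) minimises the per-word KL divergence,
    since the teacher entropy term does not depend on the student parameters.
    """
    return -np.sum(teacher_probs * student_log_probs)

# Toy example with T = 3 target positions and a vocabulary of V = 5 entries.
rng = np.random.default_rng(1)
teacher = rng.random((3, 5))
teacher /= teacher.sum(axis=1, keepdims=True)
student_logits = rng.normal(size=(3, 5))
student_log_probs = student_logits - np.logaddexp.reduce(student_logits, axis=1, keepdims=True)
loss = word_level_teaching_loss(teacher, student_log_probs)
```

Summing this quantity over the source–pivot corpus, with the expectation over target sequences approximated by the mode or by sampling, recovers the training objective JWORD.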
After obtaining ˆθx→y, the same decision rule as shown in Equation (1) can be utilized to find the most probable target sentence ˆy for a source sentence x. 4 Experiments 4.1 Setup We evaluate our approach on the Europarl (Koehn, 2005) and WMT corpora. To compare with pivotbased methods, we use the same dataset as (Cheng et al., 2016a). All the sentences are tokenized by the tokenize.perl script. All the experiments treat English as the pivot language and French as the target language. For the Europarl corpus, we evaluate our proposed methods on Spanish-French (Es-Fr) and German-French (De-Fr) translation tasks in a 1928 Corpus Direction Train Dev. Test Europarl Es→En 850K 2,000 2,000 De→En 840K 2,000 2,000 En→Fr 900K 2,000 2,000 WMT Es→En 6.78M 3,003 3,003 En→Fr 9.29M 3,003 3,003 Table 1: Data statistics. For the Europarl corpus, we evaluate our approach on Spanish-French (EsFr) and German-French (De-Fr) translation tasks. For the WMT corpus, we evaluate approach on the Spanish-French (Es-Fr) translation task. English is used as a pivot language in all experiments. zero-resource scenario. To avoid the trilingual corpus constituted by the source-pivot and pivottarget corpora, we split the overlapping pivot sentences of the original source-pivot and pivot-target corpora into two equal parts and merge them separately with the non-overlapping parts for each language pair. The development and test sets are from WMT 2006 shared task.1 The evaluation metric is case-insensitive BLEU (Papineni et al., 2002) as calculated by the multi-bleu.perl script. To deal with out-of-vocabulary words, we adopt byte pair encoding (BPE) (Sennrich et al., 2016) to split words into sub-words. The size of sub-words is set to 30K for each language. For the WMT corpus, we evaluate our approach on a Spanish-French (Es-Fr) translation task with a zero-resource setting. We combine the following corpora to form the Es-En and En-Fr parallel corpora: Common Crawl, News Commentary, Europarl v7 and UN. All the sentences are tokenized by the tokenize.perl script. Newstest2011 serves as the development set and Newstest2012 and Newstest2013 serve as test sets. We use case-sensitive BLEU to evaluate translation results. BPE is also used to reduce the vocabulary size. The size of sub-words is set to 43K, 33K, 43K for Spanish, English and French, respectively. See Table 1 for detailed statistics for the Europarl and WMT corpora. We leverage an open-source NMT toolkit dl4mt implemented by Theano 2 for all the experiments and compare our approach with state-of-the-art multilingual methods (Firat et al., 2016b) and pivot-based methods (Cheng et al., 2016a). Two variations of our framework are used in the exper1http://www.statmt.org/wmt07/shared-task.html 2dl4mt-tutorial: https://github.com/nyu-dl iments: 1. Sentence-Level Teaching: for simplicity, we use the mode as suggested in (Kim and Rush, 2016) to approximate the target sentence space in calculating the expected gradients with respect to the expectation in Equation (7). We run beam search on the pivot sentence with the teacher model and choose the highest-scoring target sentence as the mode. Beam size with k = 1 (greedy decoding) and k = 5 are investigated in our experiments, denoted as sent-greedy and sent-beam, respectively.3 2. Word-Level Teaching: we use the same mode approximation approach as in sentence-level teaching to approximate the expectation in Equation 12, denoted as word-greedy (beam search with k = 1) and word-beam (beam search with k = 5) respectively. 
Besides, Monte Carlo estimation by sampling from the teacher model is also investigated since it introduces more diverse data, denoted as wordsampling. 4.2 Assumptions Verification To verify the assumptions in Section 3.1, we train a source-to-target translation model P(y|x; θx→y) and a pivot-to-target translation model P(y|z; θz→y) using the trilingual Europarl corpus. Then, we measure the sentence-level and word-level KL divergence from the source-totarget model P(y|x; θx→y) at different iterations to the trained pivot-to-target model P(y|z; ˆθz→y) by caculating JSENT (Equation (5)) and JWORD 3We can also adopt sampling and k-best list for approximation. Random sampling brings a large variance (Sutskever et al., 2014; Ranzato et al., 2015; He et al., 2016) for sentence-level teaching. For k-best list, we renormalize the probabilities P(y|z; ˆθz→y) ∼ P(y|z; ˆθz→y)α P y∈Yk P(y|z; ˆθz→y)α , where Yk is the k-best list from beam search of the teacher model and α is a hyperparameter controling the sharpness of the distribution (Och, 2003). We set k = 5 and α = 5×10−3. The results on test set for Eureparl Corpus are 32.24 BLEU over Spanish-French translation and 24.91 BLEU over German-French translation, which are slightly better than the sent-beam method. However, considering the traing time and the memory consumption, we think mode approximation is already a good way to approximate the target sentence space for sentence-level teaching. 1929 Approx. Iterations 0 2w 4w 6w 8w JSENT greedy 313.0 73.1 61.5 56.8 55.1 beam 323.5 73.1 60.7 55.4 54.0 JWORD greedy 274.0 51.5 43.1 39.4 38.8 beam 288.7 52.7 43.3 39.2 38.4 sampling 268.6 53.8 46.6 42.8 42.4 Table 2: Verification of sentence-level and word-level assumptions by evaluating approximated KL divergence from the source-to-target model to the pivot-to-target model over training iterations of the source-to-target model. The pivot-to-target model is trained and kept fixed. Method Es→Fr De→Fr Cheng et al. (2016a) pivot 29.79 23.70 hard 29.93 23.88 soft 30.57 23.79 likelihood 32.59 25.93 Ours sent-beam 31.64 24.39 word-sampling 33.86 27.03 Table 3: Comparison with previous work on Spanish-French and German-French translation tasks from the Europarl corpus. English is treated as the pivot language. The likelihood method uses 100K parallel source-target sentences, which are not available for other methods. (Equation (9)) on 2,000 parallel source-pivot sentences from the development set of WMT 2006 shared task. Table 2 shows the results. The source-to-target model is randomly initialized at iteration 0. We find that JSENT and JWORD decrease over time, suggesting that the source-to-target and pivot-totarget models do have small KL divergence at both sentence and word levels. 4.3 Results on the Europarl Corpus Table 3 gives BLEU scores on the Europarl corpus of our best performing sentence-level method (sent-beam) and word-level method (word-sampling) compared with pivot-based methods (Cheng et al., 2016a). We use the same data preprocessing as (Cheng et al., 2016a). We find that both the sent-beam and word-sampling methods outperform the pivot-based approaches in a zero-resource scenario across language pairs. Our word-sampling method improves over the best performing zero-resource pivot-based method (soft) on Spanish-French translation by +3.29 BLEU points and German-French translation by +3.24 BLEU points. In addition, the word-sampling mothod surprisingly obtains improvement over the likelihood method, which leverages a source-target parallel corpus. 
The Method Es→Fr De→Fr dev test dev test sent-greedy 31.00 31.05 22.34 21.88 sent-beam 31.57 31.64 24.95 24.39 word-greedy 31.37 31.92 24.72 25.15 word-beam 30.81 31.21 24.64 24.19 word-sampling 33.65 33.86 26.99 27.03 Table 4: Comparison of our proposed methods on Spanish-French and German-French translation tasks from the Europarl corpus. English is treated as the pivot language. significant improvements can be explained by the error propagation problem of pivot-based methods that translation error of the source-to-pivot translation process is propagated to the pivot-to-target translation process. Table 4 shows BLEU scores on the Europarl corpus of our proposed methods. For sentencelevel approaches, the sent-beam method outperforms the sent-greedy method by +0.59 BLEU points over Spanish-French translation and +2.51 BLEU points over German-French translation on the test set. The results are in line with our observation in Table 2 that sentence-level KL divergence by beam approximation is smaller than that by greedy approximation. However, as the 1930 0 3 6 9 12 15 30 60 90 120 150 180 210 0 3 6 9 12 15 0 5 10 15 20 25 30 Valid Loss Iterations sent-greedy sent-beam word-greedy word-beam word-sampling ×10 4 ×10 4 BLEU Iterations sent-greedy sent-beam word-greedy word-beam word-sampling Figure 2: Validation loss and BLEU across iterations of our proposed methods. Method Training BLEU Es→En En→Fr Es→Fr Newstest2012 Newstest2013 Existing zero-resource NMT systems Cheng et al. (2016a)† pivot 6.78M 9.29M 24.60 Cheng et al. (2016a)† likelihood 6.78M 9.29M 100K 25.78 Firat et al. (2016b) one-to-one 34.71M 65.77M 17.59 17.61 Firat et al. (2016b)† many-to-one 34.71M 65.77M 21.33 21.19 Our zero-resource NMT system word-sampling 6.78M 9.29M 28.06 27.03 Table 5: Comparison with previous work on Spanish-French translation in a zero-resource scenario over the WMT corpus. The BLEU scores are case sensitive. †: the method depends on two-step decoding. time complexity grows linearly with the number of beams k, the better performance is achieved at the expense of search time. For word-level experiments, we observe that the word-sampling method performs much better than the other two methods: +1.94 BLEU points on Spanish-French translation and +1.88 BLEU points on German-French translation over the word-greedy method; +2.65 BLEU points on Spanish-French translation and +2.84 BLEU points on German-French translation over the word-beam method. Although Table 2 shows that word-level KL divergence approximated by sampling is larger than that by greedy or beam, sampling approximation introduces more data diversity for training, which dominates the effect of KL divergence difference. We plot validation loss4 and BLEU scores over iterations on the German-French translation task in Figure 2. We observe that word-level models 4Validation loss: the average negative log-likelihood of sentence pairs on the validation set. tend to have lower validation loss compared with sentence-level methods. Generally, models with lower validation loss tend to have higher BLEU. Our results indicate that this is not necessarily the case: the sent-beam method converges to +0.31 BLEU points on the validation set with +13 validation loss compared with the word-beam method. Kim and Rush (2016) claim a similar observation in data distillation for NMT and provide an explanation that student distributions are more peaked for sentence-level methods. 
This is indeed the case in our result: on German-French translation task the argmax for the sent-beam student model (on average) approximately accounts for 3.49% of the total probability mass, while the corresponding number is 1.25% for the word-beam student model and 2.60% for the teacher model. 4.4 Results on the WMT Corpus The word-sampling method obtains the best performance in our five proposed approaches according to experiments on the Europarl corpus. To further verify this approach, we conduct ex1931 groundtruth source Os sent´ais al volante en la costa oeste , en San Francisco , y vuestra misi´on es llegar los primeros a Nueva York . pivot You get in the car on the west coast , in San Francisco , and your task is to be the first one to reach New York . target Vous vous asseyez derri`ere le volant sur la cˆote ouest `a San Francisco et votre mission est d&apos; arriver le premier `a New York . pivot pivot You &apos;ll feel at the west coast in San Francisco , and your mission is to get the first to New York . [BLEU: 33.93] target Vous vous sentirez comme chez vous `a San Francisco , et votre mission est d&apos; obtenir le premier `a New York . [BLEU: 44.52] likelihood pivot You feel at the west coast , in San Francisco , and your mission is to reach the first to New York . [BLEU: 47.22] target Vous vous sentez `a la cˆote ouest , `a San Francisco , et votre mission est d&apos; atteindre le premier `a New York . [BLEU: 49.44] word-sampling target Vous vous sentez au volant sur la cˆote ouest , `a San Francisco et votre mission est d&apos; arriver le premier `a New York . [BLEU: 78.78] Table 6: Examples and corresponding sentence BLEU scores of translations using the pivot and likelihood methods in (Cheng et al., 2016a) and the proposed word-sampling method. We observe that our approach generates better translations than the methods in (Cheng et al., 2016a). We italicize correct translation segments which are no short than 2-grams. periments on the large scale WMT corpus for Spanish-French translation. Table 5 shows the results of our word-sampling method in comparison with other state-of-the-art baselines. Cheng et al. (2016a) use the same datasets and the same preprocessing as ours. Firat et al. (2016b) utilize a much larger training set.5 Our method obtains significant improvement over the pivot baseline by +3.46 BLEU points on Newstest2012 and over many-to-one by +5.84 BLEU points on Newstest2013. Note that both methods depend on a source-pivot-target decoding path. Table 6 shows translation examples of the pivot and likelihood methods proposed in (Cheng et al., 2016a) and our proposed word-sampling method. For the pivot and likelihood methods, the Spainish sentence segment ’sent´ais al volante’ is lost when translated to English. Therefore, both methods miss this information in the translated French sentence. However, the word-sampling method generates ’volant sur’, which partially translates ’sent´ais al volante’, resulting in improved translation quality of targetlanguage sentence. 4.5 Results with Small Source-Pivot Data The word-sampling method can also be applied to zero-resource NMT with a small source-pivot corpus. Specifically, the size of the source-pivot corpus is orders of magnitude smaller than that of the pivot-target corpus. This setting makes sense in applications. For example, there are significantly fewer Urdu-English corpora available than 5Their training set does not include the Common Crawl corpus. 
Method Corpus BLEU De-En De-Fr En-Fr MLE × √ × 19.30 transfer × √ √ 22.39 pivot √ × √ 17.32 Ours √ × √ 22.95 Table 7: Comparison on German-French translation task from the Europarl corpus with 100K German-English sentences. English is regarded as the pivot language. Transfer represents the transfer learning method in (Zoph et al., 2016). 100K parallel German-French sentences are used for the MLE and transfer methods. English-French corpora. To fulfill this task, we combine our best performing word-sampling method with the initialization and parameter freezing strategy proposed in (Zoph et al., 2016). The Europarl corpus is used in the experiments. We set the size of GermanEnglish training data to 100K and use the same teacher model trained with 900K English-French sentences. Table 7 gives the BLEU score of our method on German-French translation compared with three other methods. Note that our task is much harder than transfer learning (Zoph et al., 2016) since it depends on a parallel German-French corpus. Surprisingly, our method outperforms all other methods. We significantly improve the baseline pivot method by +5.63 BLEU points and the state-ofthe-art transfer learning method by +0.56 BLEU points. 1932 5 Related Work Training NMT models in a zero-resource scenario by leveraging other languages has attracted intensive attention in recent years. Firat et al. (2016b) propose an approach which delivers the multi-way, multilingual NMT model proposed by (Firat et al., 2016a) to zero-resource translation. They use the multi-way NMT model trained by other language pairs to generate a pseudo parallel corpus and fine-tune the attention mechanism of the multiway NMT model to enable zero-resource translation. Several authors propose a universal encoderdecoder network in multilingual scenarios to perform zero-shot learning (Johnson et al., 2016; Ha et al., 2016). This universal model extracts translation knowledge from multiple different languages, making zero-resource translation feasible without direct training. Besides multilingual NMT, another important line of research is bridging source and target languages via a pivot language. This idea is widely used in SMT (de Gispert and Mari˜no, 2006; Cohn and Lapata, 2007; Utiyama and Isahara, 2007; Wu and Wang, 2007; Bertoldi et al., 2008; Wu and Wang, 2009; Zahabi et al., 2013; Kholy et al., 2013). Cheng et al. (2016a) propose pivotbased NMT by simultaneously improving sourceto-pivot and pivot-to-target translation quality in order to improve source-to-target translation quality. Nakayama and Nishida (2016) achieve zeroresource machine translation by utilizing image as a pivot and training multimodal encoders to share common semantic representation. Our work is also related to knowledge distillation, which trains a compact model to approximate the function learned by a larger, more complex model or an ensemble of models (Bucila et al., 2006; Ba and Caurana, 2014; Li et al., 2014; Hinton et al., 2015). Kim and Rush (2016) first introduce knowledge distillation in neural machine translation. They suggest to generate a pseudo corpus to train the student network. Compared with their work, we focus on zero-resource learning instead of model compression. 6 Conclusion In this paper, we propose a novel framework to train the student model without parallel corpora under the guidance of the pre-trained teacher model on a source-pivot parallel corpus. We introduce sentence-level and word-level teaching to guide the learning process of the student model. 
Experiments on the Europarl and WMT corpora across languages show that our proposed wordlevel sampling method can significantly outperforms the state-of-the-art pivot-based methods and multilingual methods in terms of translation quality and decoding efficiency. We also analyze zero-resource translation with small source-pivot data, and combine our wordlevel sampling method with initialization and parameter freezing suggested by (Zoph et al., 2016). The experiments on the Europarl corpus show that our approach obtains an significant improvement over the pivot-based baseline. In the future, we plan to test our approach on more diverse language pairs, e.g., zero-resource Uyghur-English translation using Chinese as a pivot. It is also interesting to extend the teacherstudent framework to other cross-lingual NLP applications as our method is transparent to architectures. Acknowledgments This work was done while Yun Chen is visiting Tsinghua University. This work is partially supported by the National Natural Science Foundation of China (No.61522204, No. 61331013) and the 863 Program (2015AA015407). References Jimmy Ba and Rich Caurana. 2014. Do deep nets really need to be deep? In NIPS. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR . Nicola Bertoldi, Madalina Barbaiani, Marcello Federico, and Roldano Cattoni. 2008. Phrase-based statistical machine translation with pivot languages. In IWSLT. Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In KDD. Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016a. Neural machine translation with pivot languages. CoRR abs/1611.04928. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016b. Semisupervised learning for neural machine translation . 1933 Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In ACL. Adri`a de Gispert and Jos´e B. Mari˜no. 2006. Catalanenglish statistical machine translation without parallel corpus: bridging through spanish. In Proceedings of 5th International Conference on Language Resources and Evaluation (LREC). Citeseer, pages 65–68. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In HLT-NAACL. Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. In EMNLP. Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. CoRR abs/1611.04798. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In NIPS. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR abs/1503.02531. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In ACL. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Vi´egas, Martin Wattenberg, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. CoRR abs/1611.04558. Nal Kalchbrenner and Phil Blunsom. 2013. 
Recurrent continuous translation models. In EMNLP. Ahmed El Kholy, Nizar Habash, Gregor Leusch, Evgeny Matusov, and Hassan Sawaf. 2013. Language independent connectivity strength features for phrase pivot statistical machine translation. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In EMNLP. Philipp Koehn. 2005. Europarl: a parallel corpus for statistical machine translation. Jinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. 2014. Learning small-size dnn with outputdistribution-based criteria. In INTERSPEECH. Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In ACL. Hideki Nakayama and Noriki Nishida. 2016. Zeroresource machine translation by multimodal encoder-decoder network with multimedia pivot. CoRR abs/1611.04503. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR abs/1511.06732. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units . Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation . Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks . Masao Utiyama and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In HLT-NAACL. Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. Machine Translation 21:165–181. Hua Wu and Haifeng Wang. 2009. Revisiting pivot language approach for machine translation. In ACL/IJCNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. Samira Tofighi Zahabi, Somayeh Bakhshaei, and Shahram Khadivi. 2013. Using context vectors in improving a machine translation system with bridge language. In ACL. 1934 Xiaoning Zhu, Zhongjun He, Hua Wu, Haifeng Wang, Conghui Zhu, and Tiejun Zhao. 2013. Improving pivot-based statistical machine translation using random walk. In EMNLP. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In EMNLP. 1935
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1936–1945 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1177 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1936–1945 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1177 Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder Huadong Chen†, Shujian Huang†∗, David Chiang‡, Jiajun Chen† †State Key Laboratory for Novel Software Technology, Nanjing University {chenhd,huangsj,chenjj}@nlp.nju.edu.cn ‡Department of Computer Science and Engineering, University of Notre Dame [email protected] Abstract Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information. In this paper, we improve this model by explicitly incorporating source-side syntactic trees. More specifically, we propose (1) a bidirectional tree encoder which learns both sequential and tree structured representations; (2) a tree-coverage model that lets the attention depend on the source-side syntax. Experiments on Chinese-English translation demonstrate that our proposed models outperform the sequential attentional model as well as a stronger baseline with a bottom-up tree encoder and word coverage.1 1 Introduction Recently, neural machine translation (NMT) models (Sutskever et al., 2014; Bahdanau et al., 2015) have obtained state-of-the-art performance on many language pairs. Their success depends on the representation they use to bridge the source and target language sentences. However, this representation, a sequence of fixed-dimensional vectors, differs considerably from most theories about mental representations of sentences, and from traditional natural language processing pipelines, in which semantics is built up compositionally using a recursive syntactic structure. Perhaps as evidence of this, current NMT models still suffer from syntactic errors such as attachment (Shi et al., 2016). We argue that instead of letting the NMT model rely solely on the implicit structure it learns during training (Cho et al., ∗Corresponding author. 1Our code is publicly available at https://github. com/howardchenhd/Syntax-awared-NMT/ (a) example sentence pair with alignments aozhou x1 chongxin x2 kaifang x3 zhu x4 manila x5 dashiguan x6 (b) binarized source side tree Figure 1: An example sentence pair (a), with its binarized source side tree (b). We use xi to represent the i-th word in the source sentence. We will use this sentence pairs as the running example throughout this paper. 2014a), we can improve its performance by augmenting it with explicit structural information and using this information throughout the model. This has two benefits. First, the explicit syntactic information will help the encoder generate better source side representations. Li et al. (2015) show that for tasks in which long-distance semantic dependencies matter, representations learned from recursive models using syntactic structures may be more powerful than those from sequential recurrent models. In the NMT case, given syntactic information, it will be easier for the encoder to incorporate long distance dependencies into better representations, which is especially important for the translation of long sentences. 
Second, it becomes possible for the decoder to 1936 use syntactic information to guide its reordering decisions better (especially for language pairs with significant reordering, like Chinese-English). Although the attention model (Bahdanau et al., 2015) and the coverage model (Tu et al., 2016; Mi et al., 2016) provide effective mechanisms to control the generation of translation, these mechanisms work at the word level and cannot capture phrasal cohesion between the two languages (Fox, 2002; Kim et al., 2017). With explicit syntactic structure, the decoder can generate the translation more in line with the source syntactic structure. For example, when translating the phrase zhu manila dashiguan in Figure 1, the tree structure indicates that zhu ‘in’ and manila form a syntactic unit, so that the model can avoid breaking this unit up to make an incorrect translation like “in embassy of manila” 2. In this paper, we propose a novel encoderdecoder model that makes use of a precomputed source-side syntactic tree in both the encoder and decoder. In the encoder (§3.3), we improve the tree encoder of Eriguchi et al. (2016) by introducing a bidirectional tree encoder. For each source tree node (including the source words), we generate a representation containing information both from below (as with the original bottom-up encoder) and from above (using a top-down encoder). Thus, the annotation of each node summarizes the surrounding sequential context, as well as the entire syntactic context. In the decoder (§3.4), we incorporate source syntactic tree structure into the attention model via an extension of the coverage model of Tu et al. (2016). With this tree-coverage model, we can better guide the generation phase of translation, for example, to learn a preference for phrasal cohesion (Fox, 2002). Moreover, with a tree encoder, the decoder may try to translate both a parent and a child node, even though they overlap; the treecoverage model enables the decoder to learn to avoid this problem. To demonstrate the effectiveness of the proposed model, we carry out experiments on Chinese-English translation. Our experiments show that: (1) our bidirectional tree encoder based NMT system achieves significant improvements over the standard attention-based NMT system, and (2) incorporating source tree structure into the attention model yields a further improvement. 2According to the source sentence, “embassy” belongs to “australia”, not “manila”. x1 x2 x3 x4 x5 x6 −→h 1 −→h 2 −→h 3 −→h 4 −→h 5 −→h 6 ←−h 1 ←−h 2 ←−h 3 ←−h 4 ←−h 5 ←−h 6 Figure 2: Illustration of the bidirectional sequential encoder. The dashed rectangle represents the annotation of word xi. In all, we demonstrate an improvement of +3.54 BLEU over a standard attentional NMT system, and +1.90 BLEU over a stronger NMT system with a Tree-LSTM encoder (Eriguchi et al., 2016) and a coverage model (Tu et al., 2016). To the best of our knowledge, this is the first work that uses source-side syntax in both the encoder and decoder of an NMT system. 2 Neural Machine Translation Most NMT systems follow the encoder-decoder framework with attention, first proposed by Bahdanau et al. (2015). Given a source sentence x = x1 · · · xi · · · xI and a target sentence y = y1 · · · yj · · · yJ, NMT aims to directly model the translation probability: P(y | x; θ) = J Y 1 P(yj | y<j, x; θ), (1) where θ is a set of parameters and y< j is the sequence of previously generated target words. 
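As a concrete reading of Eq. (1), the following sketch (ours, not part of the described system; NumPy only, with an arbitrary toy vocabulary) accumulates log P(y | x; θ) from the per-step distributions a model would produce under teacher forcing:

```python
import numpy as np

def sentence_log_prob(step_distributions, target_ids):
    """log P(y | x; theta) = sum_j log P(y_j | y_<j, x; theta), as in Eq. (1).

    step_distributions: length-J list; entry j is the model's predicted
        distribution over the target vocabulary at step j (conditioned on
        y_<j and x via teacher forcing).
    target_ids: length-J list of reference target word indices y_j.
    """
    assert len(step_distributions) == len(target_ids)
    return sum(np.log(dist[y_j])
               for dist, y_j in zip(step_distributions, target_ids))

# Toy example: a 3-word target over a 5-word vocabulary (values are arbitrary).
rng = np.random.default_rng(0)
J, V = 3, 5
dists = [rng.dirichlet(np.ones(V)) for _ in range(J)]
print(sentence_log_prob(dists, target_ids=[2, 0, 4]))
```

Training maximizes this log-likelihood (equivalently, minimizes the per-token cross-entropy) over the parallel training corpus.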
Here, we briefly describe the underlying framework of the encoder-decoder NMT system. 2.1 Encoder Model Following Bahdanau et al. (2015), we use a bidirectional gated recurrent unit (GRU) (Cho et al., 2014b) to encode the source sentence, so that the annotation of each word contains a summary of both the preceding and following words. The bidirectional GRU consists of a forward and a backward GRU, as shown in Figure 2. The forward GRU reads the source sentence from left to right and calculates a sequence of forward hidden states (−→ h1, . . . , −→ hI). The backward GRU scans the source sentence from right to left, resulting in a sequence of backward hidden states (←− h1, . . . , ←− hI). Thus −→hi = GRU(−−→ hi−1, si) ←−hi = GRU(←−− hi−1, si) (2) 1937 where si is the i-th source word’s word embedding, and GRU is a gated recurrent unit; see the paper by Cho et al. (2014b) for a definition. The annotation of each source word xi is obtained by concatenating the forward and backward hidden states: ←→ hi =  −→hi←−hi . The whole sequence of these annotations is used by the decoder. 2.2 Decoder Model The decoder is a forward GRU predicting the translation y word by word. The probability of generating the j-th word yj is: P(yj | y<j, x; θ) = softmax(t j−1, dj, cj) (3) where t j−1 is the word embedding of the ( j −1)th target word, dj is the decoder’s hidden state of time j, and cj is the context vector at time j. The state dj is computed as dj = GRU(dj−1, t j−1, cj), (4) where GRU(·) is extended to more than two arguments by first concatenating all arguments except the first. The attention mechanism computes the context vector ci as a weighted sum of the source annotations, c j = IX i=1 α j,i ←→ hi (5) where the attention weight α j,i is αj,i = exp (ej,i) PI i′=1 exp (ej,i′) (6) and ej,i = vT a tanh (Wadj−1 + Ua ←→ hi ) (7) where va, Wa and Ua are the weight matrices of the attention model, and ej,i is an attention model that scores how well dj−1 and ←→ hi match. With this strategy, the decoder can attend to the source annotations that are most relevant at a given time. 3 Tree Structure Enhanced Neural Machine Translation Although syntax has shown its effectiveness in non-neural statistical machine translation (SMT) systems (Yamada and Knight, 2001; Koehn et al., 2003; Liu et al., 2006; Chiang, 2007), most proposed NMT models (a notable exception being that of Eriguchi et al. (2016)) process a sentence only as a sequence of words, and do not explicitly exploit the inherent structure of natural language sentences. In this section, we present models which directly incorporate source syntactic trees into the encoder-decoder framework. 3.1 Preliminaries Like Eriguchi et al. (2016), we currently focus on source side syntactic trees, which can be computed prior to translation. Whereas Eriguchi et al. (2016) use HPSG trees, we use phrase-structure trees as in the Penn Chinese Treebank (Xue et al., 2005). Currently, we are only using the structure information from the tree without the syntactic labels. Thus our approach should be applicable to any syntactic grammar that provides such a tree structure (Figure 1(b)). More formally, the encoder is given a source sentence x = x1 · · · xI as well as a source tree whose leaves are labeled x1, . . . , xI. We assume that this tree is strictly binary branching. For convenience, each node is assigned an index. The leaf nodes get indices 1, . . . , I, which is the same as their word indices. 
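To make the indexing convention concrete, here is a minimal array-based sketch of a binarized source tree. The toy tree, the interior-node numbering, and the ordering trick are assumptions of this sketch only; the accessors mirror the p(k), L(k), R(k) notation introduced in the next paragraph.

```python
# Hypothetical 4-leaf binarized tree ((x1 x2) (x3 x4)); not the paper's
# running example.  Leaves keep their word indices 1..I; numbering the
# interior nodes I+1, I+2, ... bottom-up is an assumption of this sketch.
I = 4
left  = {5: 1, 6: 3, 7: 5}   # L(k): left child of interior node k
right = {5: 2, 6: 4, 7: 6}   # R(k): right child of interior node k

parent = {}                  # p(k): parent index, derived from the child maps
for k in left:
    parent[left[k]] = k
    parent[right[k]] = k

def is_leaf(k):
    return k <= I

# Because interior indices were assigned bottom-up, increasing index order is
# a valid bottom-up schedule and its reverse a valid top-down schedule.
bottom_up_order = sorted(left)
top_down_order = list(reversed(bottom_up_order))

print(parent)                           # {1: 5, 2: 5, 3: 6, 4: 6, 5: 7, 6: 7}
print(bottom_up_order, top_down_order)  # [5, 6, 7] [7, 6, 5]
```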
For any node with index k, let p(k) denote the index of the node’s parent (if it exists), and L(k) and R(k) denote the indices of the node’s left and right children (if they exist). 3.2 Tree-GRU Encoder We first describe tree encoders (Tai et al., 2015; Eriguchi et al., 2016), and then discuss our improvements. Following Eriguchi et al. (2016), we build a tree encoder on top of the sequential encoder (as shown in Figure 3(a)). If node k is a leaf node, its hidden state is the annotation produced by the sequential encoder: h↑ k = ←→ hk . Thus, the encoder is able to capture both sequential context and syntactic context. If node k is an interior node, its hidden state is the combination of its previously calculated left 1938 child hidden state hL(k) and right child hidden state hR(k): h↑ k = f(h↑ L(k), h↑ R(k)) (8) where f(·) is a nonlinear function, originally a Tree-LSTM (Tai et al., 2015; Eriguchi et al., 2016). The first improvement we make to the above tree encoder is that, to be consistent with the sequential encoder model, we use Tree-GRU units instead of Tree-LSTM units. Similar to TreeLSTMs, the Tree-GRU has gating mechanisms to control the information flow inside the unit for every node without separate memory cells. Then, Eq. 8 is calculated by a Tree-GRU as follows: rL = σ(U(rL) L h↑ L(k) + U(rL) R h↑ R(k) + b(rL)) rR = σ(U(rR) L h↑ L(k) + U(rR) R h↑ R(k) + b(rR)) zL = σ(U(zL) L h↑ L(k) + U(zL) R h↑ R(k) + b(zL)) zR = σ(U(zR) L h↑ L(k) + U(zR) R h↑ R(k) + b(zR)) z = σ(U(z) L h↑ L(k) + U(z) R h↑ R(k) + b(z)) ˜h↑ k = tanh  UL(rL ⊙h↑ L(k)) + UR(rR ⊙h↑ R(k))  h↑ k = zL ⊙h↑ L(k) + zR ⊙h↑ R(k) + z ⊙˜h↑ k where rL, rR are the reset gates and zL, zR are the update gates for the left and right children, and z is the update gate for the internal hidden state ˜h↑ k. The U(·) and b(·) are the weight matrices and bias vectors. 3.3 Bidirectional Tree Encoder Although the bottom-up tree encoder can take advantage of syntactic structure, the learned representation of a node is based on its subtree only; it contains no information from higher up in the tree. In particular, the representation of leaf nodes is still the sequential one. Thus no syntactic information is fed into words. By analogy with the bidirectional sequential encoder, we propose a natural extension of the bottom-up tree encoder: the bidirectional tree encoder (Figure 3(b)). Unlike the bottom-up tree encoder or the rightto-left sequential encoder, the top-down encoder by itself would have no lexical information as input. To address this issue, we feed the hidden states of the bottom-up encoder to the top-down encoder. In this way, the information of the whole syntactic tree is handed to the root node and propagated to its offspring by the top-down encoder. x1 x2 x3 x4 x5 x6 −→h 1 −→h 2 −→h 3 −→h 4 −→h 5 −→h 6 −→h 1 ←−h 2 ←−h 3 ←−h 4 ←−h 5 ←−h 6 h↑ 7 h↑ 8 h↑ 9 h↑ 10 h↑ 11 (a) Tree-GRU Encoder x1 x2 x3 x4 x5 x6 h↑ 1 h↑ 2 h↑ 3 h↑ 4 h↑ 5 h↑ 6 h↓ 1 h↓ 2 h↓ 3 h↓ 4 h↓ 5 h↓ 6 h↑ 7 h↓ 7 h↑ 8 h↓ 8 h↑ 9 h↓ 9 h↑ 10 h↓ 10 h↑ 11 h↓ 11 (b) Bidirectional Tree Encoder Figure 3: Illustration of the proposed encoder models for the running example. The non-leaf nodes are assigned with index 7-11. The annotations h↑ i of leaf nodes in (b) are identical to the annotations (dashed rectangles) of leaf nodes in (a). The dotted rectangles in (b) indicate the annotation produced by the bidirectional tree encoder. In the top-down encoder, each hidden state has only one predecessor. 
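A minimal NumPy sketch (ours, not the authors' code) of the bottom-up Tree-GRU composition, i.e., Eq. (8) together with the unnumbered gate equations that expand it; the hidden size and random parameters are placeholders, so it illustrates the dataflow rather than a trained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (toy value; the paper uses much larger annotations)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One (U_L, U_R, b) triple per gate, plus the candidate-state weights U_L, U_R.
def make_params():
    gates = {}
    for name in ("rL", "rR", "zL", "zR", "z"):
        gates[name] = (rng.normal(size=(d, d)),
                       rng.normal(size=(d, d)),
                       np.zeros(d))
    return gates, rng.normal(size=(d, d)), rng.normal(size=(d, d))

gates, UL, UR = make_params()

def tree_gru_node(h_left, h_right):
    """Bottom-up state of an interior node from its two children (Eq. 8)."""
    def gate(name):
        U_l, U_r, b = gates[name]
        return sigmoid(U_l @ h_left + U_r @ h_right + b)
    rL, rR = gate("rL"), gate("rR")
    zL, zR, z = gate("zL"), gate("zR"), gate("z")
    h_tilde = np.tanh(UL @ (rL * h_left) + UR @ (rR * h_right))
    return zL * h_left + zR * h_right + z * h_tilde

h_up = tree_gru_node(rng.normal(size=d), rng.normal(size=d))
print(h_up.shape)  # (8,)
```

In contrast to this two-child combination, each node in the top-down direction has only a single predecessor, which is why the standard sequential GRU described next suffices there.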
In fact, the top-down path from root of a tree to any node can be viewed as a sequential recurrent neural network. We can calculate the hidden states of each node top-down using a standard sequential GRU. First, the hidden state of the root node ρ is simply computed as follows: h↓ ρ = tanh (Wh↑ ρ + b) (9) where W and b are a weight matrix and bias vector. Then, other nodes are calculated by a GRU. For hidden state h↓ k: h↓ k = GRU(h↓ p(k), h↑ k) (10) 1939 where p(k) is the parent index of k. We replace the weight matrices Wr, Ur, Wz, Uz, W and U in the standard GRU with Pr D, Qr D, Pz D, Qz D, PD, and QD, respectively. The subscript D is either L or R depending on whether node k is a left or right child, respectively. Finally, the annotation of each node is obtained by concatenating its bottom-up hidden state and top-down hidden state: h↕ k =  h↑ k h↓ k . This allows the tree structure information flow from the root to the leaves (words). Thus, all the annotations are based on the full context of word sequence and syntactic tree structure. Kokkinos and Potamianos (2017) propose a similar bidirectional Tree-GRU for sentiment analysis, which differs from ours in several respects: in the bottom-up encoder, we use separate reset/update gates for left and right children, analogous to Tree-LSTMs (Tai et al., 2015); in the topdown encoder, we use separate weights for left and right children. Teng and Zhang (2016) also propose a bidirectional Tree-LSTM encoder for classification tasks. They use a more complex head-lexicalization scheme to feed the top-down encoder. We will compare their model with ours in the experiments. 3.4 Tree-Coverage Model We also extend the decoder to incorporate information about the source syntax into the attention model. We have observed two issues in translations produced using the tree encoder. First, a syntactic phrase in the source sentence is often incorrectly translated into discontinuous words in the output. Second, since the non-leaf node annotations contain more information than the leaf node annotations, the attention model prefers to attend to the non-leaf nodes, which may aggravate the over-translation problem (translating the same part of the sentence more than once). As shown in Figure 4(a), almost all the non-leaf nodes are attended too many times during decoding. As a result, the Chinese phrase zhu manila is translated twice because the model attends to the node spanning zhu manila even though both words have already been translated; there is no mechanism to prevent this. (a) Tree-GRU Encoder (b) + Tree-Coverage Model Figure 4: The attention heapmap plotting the attention weights during different translation steps, for translating the sentence in Figure 1(a). The nodes [7]-[11] correspond to non-leaf nodes indexed in Figure 3. Incorporating Tree-Coverage Model produces more concentrated alignments and alleviates the over-translation problem. Inspired by the approaches of Cohn et al. (2016), Feng et al. (2016), Tu et al. (2016) and Mi et al. (2016), we propose to use prior knowledge to control the attention mechanism. In our case, the prior knowledge is the source syntactic information. In particular, we build our model on top of the word coverage model proposed by Tu et al. (2016), which alleviate the problems of over-translation and under-translation (failing to translate part of a sentence). 
The word coverage model makes the attention at a given time step j dependent on the attention at previous time steps via coverage vectors: C j,i = GRU(C j−1,i, αj,i, dj−1, hi). (11) 1940 The coverage vectors are, in turn, used to update the attention at the next time step, by a small modification to the calculation of ej,i in Eq. (7): e j,i = vT a tanh (Wadj−1 + Uahi + VaC j−1,i). (12) The word coverage model could be interpreted as a control mechanism for the attention model. Like the standard attention model, this coverage model sees the source-sentence annotations as a bag of vectors; it knows nothing about word order, still less about syntactic structure. For our model, we extend the word coverage model to coverage on the tree structure by adding a coverage vector for each node in the tree. We further incorporate source tree structure information into the calculation of the coverage vector by requiring each node’s coverage vector to depend on its children’s coverage vectors and attentions at the previous time step: C j,i = GRU(C j−1,i, αj,i, dj−1, hi, C j−1,L(i), αj,L(i), C j−1,R(i), αj,R(i)). (13) Although both child and parent nodes of a subtree are helpful for translation, they may supply redundant information. With our mechanism, when the child node is used to produce a translation, the coverage vector of its parent node will reflect this fact, so that the decoder may avoid using the redundant information in the parent node. Figure 4(b) shows a heatmap of the attention of our tree structure enhanced attention model. The attention of non-leaf nodes becomes more concentrated and the over-translation of zhu manila is corrected. 4 Experiments 4.1 Data We conduct experiments on the NIST ChineseEnglish translation task. The parallel training data consists of 1.6M sentence pairs extracted from LDC corpora,3 with 46.6M Chinese words and 52.5M English words, respectively. We use NIST MT02 as development data, and NIST MT03–06 as test data. These data are mostly in the same genre (newswire), avoiding the extra consideration of domain adaptation. Table 1 shows the statistics of the data sets. The Chinese side of the corpora is word segmented using ICTCLAS.4 We 3LDC2002E18, LDC2003E14, the Hansards portion of LDC2004T08, and LDC2005T06. 4http://ictclas.nlpir.org Data Usage Sents. LDC train 1.6M MT02 dev 878 MT03 test 919 MT04 test 1,597 MT05 test 1,082 MT06 test 1,664 Table 1: Experiment data and statistics. parse the Chinese sentences with the Berkeley Parser5 (Petrov and Klein, 2007) and binarize the resulting trees following Zhang and Clark (2009). The English side of the corpora is lowercased and tokenized. We filter out any translation pairs whose source sentences fail to be parsed. For efficient training, we also filter out the sentence pairs whose source or target lengths are longer than 50. We use a shortlist of the 30,000 most frequent words in each language to train our models, covering approximately 98.2% and 99.5% of the Chinese and English tokens, respectively. All out-of-vocabulary words are mapped to a special symbol UNK. 4.2 Model and Training Details We compare our proposed models with several state-of-the-art NMT systems and techniques: • NMT: the standard attentional NMT model (Bahdanau et al., 2015). • Tree-LSTM: the attentional NMT model extended with the Tree-LSTM encoder (Eriguchi et al., 2016). • Coverage: the attentional NMT model extended with word coverage (Tu et al., 2016). 
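For the Coverage baseline above (Eq. 11) and our tree-coverage extension (Eq. 13), the only difference is what is concatenated into the coverage GRU's input at each node. The sketch below is ours; the GRU cell is an untrained stand-in and all dimensions are toy values, so only the input construction is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)
cov, dec, ann = 5, 6, 7   # toy sizes: coverage vector, decoder state, annotation

def gru_cell(prev_state, inp):
    # Stand-in for a trained GRU cell (a fresh random projection each call),
    # kept only so the sketch executes; the shapes are what matter here.
    W = rng.normal(size=(prev_state.size, prev_state.size + inp.size))
    return np.tanh(W @ np.concatenate([prev_state, inp]))

def word_coverage(C_prev_i, alpha_ji, d_prev, h_i):
    # Eq. (11): node i's coverage depends only on its own attention history.
    return gru_cell(C_prev_i, np.concatenate([[alpha_ji], d_prev, h_i]))

def tree_coverage(C_prev_i, alpha_ji, d_prev, h_i,
                  C_prev_L, alpha_jL, C_prev_R, alpha_jR):
    # Eq. (13): additionally feed the children's coverage and attention, so
    # attention already spent on a child is visible when scoring the parent.
    extra = np.concatenate([C_prev_L, [alpha_jL], C_prev_R, [alpha_jR]])
    return gru_cell(C_prev_i,
                    np.concatenate([[alpha_ji], d_prev, h_i, extra]))

C0 = np.zeros(cov)
d_prev, h_i = rng.normal(size=dec), rng.normal(size=ann)
print(word_coverage(C0, 0.3, d_prev, h_i).shape)                    # (5,)
print(tree_coverage(C0, 0.3, d_prev, h_i, C0, 0.1, C0, 0.2).shape)  # (5,)
```

Following the convention of Eq. (4), all inputs other than the previous state are concatenated before entering the GRU.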
We used the dl4mt implementation of the attentional model,6 reimplementing the tree encoder and word coverage models. The word embedding dimension is 512. The hidden layer sizes of both forward and backward sequential encoder are 1024 (except where indicated). Since our TreeGRU encoders are built on top of the bidirectional sequential encoder, the size of the hidden layer (in each direction) is 2048. For the coverage model, we set the size of coverage vectors to 50. 5https://github.com/slavpetrov/ berkeleyparser 6https://github.com/nyu-dl/dl4mt-tutorial 1941 # Encoder Coverage MT02 MT03 MT04 MT05 MT06 Average 1 Sequential no 33.76 31.88 33.15 30.55 27.47 30.76 2 Tree-LSTM no 33.83 33.15 33.81 31.22 27.86 31.51(+0.75) 3 Tree-GRU no 35.39 33.62 35.1 32.55 28.26 32.38(+1.62) 4 Bidirectional no 35.52 33.91 35.51 33.34 29.91 33.17(+2.41) 5 Sequential word 34.21 32.73 34.17 31.64 28.29 31.71(+0.95) 6 Tree-LSTM word 35.81 33.62 34.84 32.6 28.52 32.40(+1.64) 7 Tree-GRU word 35.91 33.71 35.46 33.02 29.14 32.84(+2.08) 8 Bidirectional word 36.14 35.00 36.07 33.74 30.40 33.80(+3.04) 9 Tree-LSTM tree 34.97 33.91 35.21 33.08 29.38 32.90(+2.14) 10 Tree-GRU tree 35.67 34.25 35.72 33.47 29.95 33.35(+2.59) 11 Bidirectional tree 36.57 35.64 36.63 34.35 30.57 34.30(+3.54) Table 2: BLEU scores of different systems. “Sequential”, “Tree-LSTM”, “Tree-GRU” and “Bidirectional” denote the encoder part for the standard sequential encoder, Tree-LSTM encoder, Tree-GRU encoder and the bidirectional tree encoder, respectively. “no”, “word” and “tree” in column “Coverage” represents the decoder part for using no coverage (standard attention), word coverage (Tu et al., 2016) and our proposed tree-coverage model, respectively. # System Coverage MT02 MT03 MT04 MT05 MT06 Average 12′ Seq-LSTM no 34.98 32.81 34.08 31.39 28.03 31.58(+0.82) 13′ SeqTree-LSTM no 35.28 33.56 34.94 32.64 29.26 32.60(+1.84) Table 3: BLEU scores of different systems based on LSTM. “Seq-LSTM” denotes both the encoder and decoder parts for the sequential model are based on LSTM; “SeqTree-LSTM” means using Tree-LSTM encoder on top of “Seq-LSTM”. We use Adadelta (Zeiler, 2012) for optimization using a mini-batch size of 32. All other settings are the same as in Bahdanau et al. (2015). We use case insensitive 4-gram BLEU (Papineni et al., 2002) for evaluation, as calculated by multi-bleu.perl in the Moses toolkit.7 4.3 Tree Encoders This set of experiments evaluates the effectiveness of our proposed tree encoders. Table 2, row 2 confirms the finding of Eriguchi et al. (2016) that a Tree-LSTM encoder helps, and row 3 shows that our Tree-GRU encoder gets a better result (+0.87 BLEU, v.s. row 2). To verify our assumption that model consistency is important for performance, we also conduct experiments to compare TreeLSTM and Tree-GRU on top of LSTM-based encoder-decoder settings. Tree-Lstm with LSTM based sequential model can obtain 1.02 BLEU improvement(Table 3, row 13′), while Tree-LSTM with GRU based sequential model only gets 0.75 BLEU improvement. Although Tree-Lstm with LSTM based sequential model obtain a slightly better result(+0.22 BLEU, v.s. Table 2, row 3), it 7http://www.statmt.org/moses has more parameters(+1.6M) and takes 1.3 times longer for training. Since the annotation size of our bidirectional tree encoder is twice of the Tree-LSTM encoder, we halved the size of the hidden layers in the sequential encoder to 512 in each direction, to make fair comparison. These results are shown in Table 4. 
Row 4′ shows that, even with the same annotation size, our bidirectional tree encoder works better than the original Tree-LSTM encoder (row 2). In fact, our halved-sized unidirectional TreeGRU encoder (row 3′) also works better than the Tree-LSTM encoder (row 2) with half of its annotation size. We also compared our bidirectional tree encoder with the head-lexicalization based bidirectional tree encoder proposed by Teng and Zhang (2016), which forms the input vector for each nonleaf node by a bottom-up head propagation mechanism (Table 4, row 14′). Our bidirectional tree encoder gives a better result, suggesting that head word information may not be as helpful for machine translation as it is for syntactic parsing. When we set the hidden size back to 1024, we found that training the bidirectional tree encoder 1942 # Encoder Coverage MT02 MT03 MT04 MT05 MT06 Average 3′ Tree-GRU no 34.92 32.79 34.16 32.03 28.75 31.93(+1.17) 4′ Bidirectional no 35.02 32.64 35.04 32.50 29.72 32.48(+1.72) 14′ Bidirectional-head no 34.66 33.17 34.78 31.70 28.47 32.03(+1.27) Table 4: Experiments with 512 hidden units in each direction of the sequential encoder. The bidirectional tree encoder using head-lexicalization (Bidirectional-head), proposed by (Teng and Zhang, 2016), does not work as well as our simpler bidirectional tree encoder (Bidirectional). was more difficult. Therefore, we adopted a twophase training strategy: first, we train the parameters of the bottom-up encoder based NMT system; then, with the initialization of bottom-up encoder and random initialization of the top-down part and decoder, we train the bidirectional tree encoder based NMT system. Table 2, row 4 shows the results of this two-phase training: the bidirectional model (row 4) is 0.79 BLEU better than our unidirectional Tree-GRU (row 3). 4.4 Tree-Coverage Model Rows 5–8 in Table 2 show that the word coverage model of Tu et al. (2016) consistently helps when used with our proposed tree encoders, with the bidirectional tree encoder remaining the best. However, the improvements of the tree encoder models are smaller than that of the baseline system. This may be caused by the fact that the word coverage model neglects the relationship among the trees, e.g. the relationship between children and parent nodes. Our tree-coverage model consistently improves performance further (rows 9–11). Our best model combines our bidirectional tree encoder with our tree-coverage model (row 11), yielding a net improvement of +3.54 BLEU over the standard attentional model (row 1), and +1.90 BLEU over the stronger baseline that implements both the bottom-up tree encoder and coverage model from previous work (row 6). As noted before, the original coverage model does not take word order into account. For comparison, we also implement an extension of the coverage model that lets each coverage vector also depend on those of its left and right neighbors at the previous time step. This model does not help; in fact, it reduces BLEU by about 0.2. 4.5 Analysis By Sentence Length Following Bahdanau et al. (2015), we bin the development and test sentences by length and show BLEU scores for each bin in Figure 5. The proposed bidirectional tree encoder outperforms the Figure 5: Performance of translations with respect to the lengths of the source sentences. “+” indicates the improvement over the baseline sequential model. sequential NMT system and the Tree-GRU encoder across all lengths. 
The improvements become larger for sentences longer than 20 words, and the biggest improvement is for sentences longer than 50 words. This provides some evidence for the importance of syntactic information for long sentences. 5 Related Work Recently, many studies have focused on using explicit syntactic tree structure to help learn sentence representations for various sentence classification tasks. For example, Teng and Zhang (2016) and Kokkinos and Potamianos (2017) extend the bottom-up model to a bidirectional model for classification tasks, using Tree-LSTMs with head lexicalization and Tree-GRUs, respectively. We draw on some of these ideas and apply them to machine translation. We use the representation learnt from tree structures to enhance the original sequential model, and make use of these syntactic information during the generation phase. In NMT systems, the attention model (Bahdanau et al., 2015) becomes a crucial part of the 1943 decoder model. Cohn et al. (2016) and Feng et al. (2016) extend the attentional model to include structural biases from word based alignment models. Kim et al. (2017) incorporate richer structural distributions within deep networks to extend the attention model. Our contribution to the decoder model is to directly exploit structural information in the attention model combined with a coverage mechanism. 6 Conclusion We have investigated the potential of using explicit source-side syntactic trees in NMT by proposing a novel syntax-aware encoder-decoder model. Our experiments have demonstrated that a top-down encoder is a useful enhancement for the original bottom-up tree encoder (Eriguchi et al., 2016); and incorporating syntactic structure information into the decoder can better control the translation. Our analysis suggests that the benefit of source-side syntax is especially strong for long sentences. Our current work only uses the structure part of the syntactic tree, without the labels. For future work, it will be interesting to make use of node labels from the tree, or to use syntactic information on the target side, as well. Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments. This work is supported by the National Science Foundation of China (No. 61672277, 61300158, 61472183). Part of Huadong Chen’s contribution was made when visiting University of Notre Dame. His visit was supported by the joint PhD program of China Scholarship Council. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. http://arxiv.org/abs/1409.0473. David Chiang. 2007. Hierarchical phrase-based translation. Compututational Linguistics 33(2):201–228. https://doi.org/10.1162/coli.2007.33.2.201. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Proc. Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. pages 103–111. http://www.aclweb.org/anthology/W14-4012. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. EMNLP. pages 1724–1734. http://www.aclweb.org/anthology/D14-1179. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. 
Incorporating structural alignment biases into an attentional neural translation model. In Proc. NAACL HLT. pages 876–885. http://www.aclweb.org/anthology/N16-1102. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proc. ACL. pages 823– 833. http://www.aclweb.org/anthology/P16-1078. Shi Feng, Shujie Liu, Nan Yang, Mu Li, Ming Zhou, and Kenny Q. Zhu. 2016. Improving attention modeling with implicit distortion and fertility for machine translation. In Proc. COLING. pages 3082– 3092. http://aclweb.org/anthology/C16-1290. Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. EMNLP. pages 304– 3111. https://doi.org/10.3115/1118693.1118732. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. In Proc. ICLR. http://arxiv.org/abs/1702.00887. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. NAACL HLT. pages 48–54. https://doi.org/10.3115/1073445.1073462. Filippos Kokkinos and Alexandros Potamianos. 2017. Structural attention neural networks for improved sentiment analysis. In Proc. EACL. pages 586–591. http://www.aclweb.org/anthology/E17-2093. Jiwei Li, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proc. EMNLP. pages 2304–2314. http://aclweb.org/anthology/D15-1278. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. ACL. pages 609–616. https://doi.org/10.3115/1220175.1220252. Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Proc. EMNLP. pages 955–960. https://aclweb.org/anthology/D16-1096. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. ACL. pages 311–318. https://doi.org/10.3115/1073083.1073135. 1944 Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proc. NAACL HLT. pages 404–411. http://www.aclweb.org/anthology/N/N07/N071051. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proc. EMNLP. pages 1526–1534. https://aclweb.org/anthology/D16-1159. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104– 3112. http://papers.nips.cc/paper/5346-sequenceto-sequence-learning-with-neural-networks. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proc. ACL-IJCNLP. pages 1556–1566. http://www.aclweb.org/anthology/P15-1150. Zhiyang Teng and Yue Zhang. 2016. Bidirectional tree-structured LSTM with head lexicalization. arXiv:1611.06788. http://arxiv.org/abs/1611.06788. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proc. ACL. pages 76–85. http://www.aclweb.org/anthology/P16-1008. Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Nat. Lang. Eng. 11(2):207–238. https://doi.org/10.1017/S135132490400364X. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. ACL. pages 523–530. 
https://doi.org/10.3115/1073012.1073079. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese Treebank using a global discriminative model. In Proc. IWPT. pages 162–171. http://www.aclweb.org/anthology/W09-3825. 1945
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1946–1958 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1178 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1946–1958 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1178 Cross-lingual Name Tagging and Linking for 282 Languages Xiaoman Pan1, Boliang Zhang1, Jonathan May2, Joel Nothman3, Kevin Knight2, Heng Ji1 1 Computer Science Department, Rensselaer Polytechnic Institute {panx2,zhangb8,jih}@rpi.edu 2 Information Sciences Institute, University of Southern California {jonmay,knight}@isi.edu 3 Sydney Informatics Hub, University of Sydney [email protected] Abstract The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating “silver-standard” annotations by transferring annotations from English to other languages through crosslingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from crosslingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and on-Wikipedia data. All the data sets, resources and systems for 282 languages are made publicly available as a new benchmark 1. 1 Introduction Information provided in languages which people can understand saves lives in crises. For example, language barrier was one of the main difficulties faced by humanitarian workers responding to the Ebola crisis in 2014. We propose to break language barriers by extracting information (e.g., entities) from a massive variety of languages and ground the information into an existing knowledge base which is accessible to a user in his/her own 1http://nlp.cs.rpi.edu/wikiann language (e.g., a reporter from the World Health Organization who speaks English only). Wikipedia is a massively multi-lingual resource that currently hosts 295 languages and contains naturally annotated markups 2 and rich informational structures through crowd-sourcing for 35 million articles in 3 billion words. Name mentions in Wikipedia are often labeled as anchor links to their corresponding referent pages. Each entry in Wikipedia is also mapped to external knowledge bases such as DBpedia3, YAGO (Mahdisoltani et al., 2015) and Freebase (Bollacker et al., 2008) that contain rich properties. Figure 1 shows an example of Wikipedia markups and KB properties. We leverage these markups for develop✤Wikipedia Article: Mao Zedong (d. 26 Aralık 1893 - ö. 9 Eylül 1976), Çinli devrimci ve siyasetçi. Çin Komünist Partisinin (ÇKP) ve Çin Halk Cumhuriyetinin kurucusu. ✤Wikipedia Markup: [[Mao Zedong]] (d. [[26 Aralık]] [[1893]] - ö. [[9 Eylül]] [[1976]]), Çinli devrimci ve siyasetçi. [[Çin Komünist Partisi]]nin (ÇKP) ve [[Çin Halk Cumhuriyeti]]nin kurucusu. 
tr/Çin_K en/Comm e.g., [[Çin Komü KB Properties (e.g., DBpedia, YAGO) formationDate headquarter ideology … (Mao Zedong (December 26, 1893 - September 9, 1976) is a Chinese revolutionary and politician. The founder of the Chinese Communist Party (CCP) and the People's Republic of China.) tr/Çin_Komünist_Partisi Anchor Link en/Communist_Party_of_China Cross-lingual Link e.g., [[Çin Komünist Partisi]]nin nin Wikipedia Topic Categories Ruling Communist parties Chinese Civil War Parties of one-party systems … Affix Figure 1: Examples of Wikipedia Markups and KB Properties ing a language universal framework to automatically extract name mentions from documents in 2https://en.wikipedia.org/wiki/Help:Wiki markup 3http://wiki.dbpedia.org 1946 282 languages, and link them to an English KB (Wikipedia in this work). The major challenges and our new solutions are summarized as follows. Creating “Silver-standard” through crosslingual entity transfer. The first step is to classify English Wikipedia entries into certain entity types and then propagate these labels to other languages. We exploit the English Abstract Meaning Representation (AMR) corpus (Banarescu et al., 2013) which includes both name tagging and linking annotations for fine-grained entity types to train an automatic classifier. Furthermore, we exploit each entry’s properties in DBpedia as features and thus eliminate the need of language-specific features and resources such as part-of-speech tagging as in previous work (Section 2.2). Refine annotations through self-training. The initial annotations obtained from above are too incomplete and inconsistent. Previous work used name string match to propagate labels. In contrast, we apply self-training to label other mentions without links in Wikipedia articles even if they have different surface forms from the linked mentions (Section 2.4). Customize annotations through cross-lingual topic transfer. For the first time, we propose to customize name annotations for specific downstream applications. Again, we use a cross-lingual knowledge transfer strategy to leverage the widely available English corpora to choose entities with specific Wikipedia topic categories (Section 2.5). Derive morphology analysis from Wikipedia markups. Another unique challenge for morphologically rich languages is to segment each token into its stemming form and affixes. Previous methods relied on either high-cost supervised learning (Roth et al., 2008; Mahmoudi et al., 2013; Ahlberg et al., 2015), or low-quality unsupervised learning (Gr¨onroos et al., 2014; Ruokolainen et al., 2016). We exploit Wikipedia markups to automatically learn affixes as language-specific features (Section 2.3). Mine word translations from cross-lingual links. Name translation is a crucial step to generate candidate entities in cross-lingual entity linking. Only a small percentage of names can be directly translated by matching against cross-lingual Wikipedia title pairs. Based on the observation that Wikipedia titles within any language tend to follow a consistent style and format, we propose an effective method to derive word translation pairs from these titles based on automatic alignment (Section 3.2). 2 Name Tagging 2.1 Overview Our first step is to generate “silver-standard” name annotations from Wikipedia markups and train a universal name tagger. Figure 2 shows our overall procedure and the following subsections will elaborate each component. [[Мітт Ромні]]Politician|PER народився в [[Детройт]]City|GPE, [[Мічиган]]State|GPE. 
 party, successor, …… en/Detroit City|GPE areaCode, areaTotal, postalCode, elevation, …… en/Michigan State|GPE demonym, largestCity, language, country, …… en/Harvard_ University University| ORG numberOfStudents, motto location, campus, …… ❖Propagate classification results using cross-lingual links and project classification results to anchor links en/Michigan State|GPE Ukrainian: uk/Мічиган Amharic: am/ሚሺጋን Tibetan: bo/མི་ཅི་གྷན། Tamil: ta/!c#க% Thai: th/รัฐมิชิแกน …… Cross-lingual Links ❖Apply self-training for unlabeled data Training Data Name Tagger Unlabeled Data Train Tag Add High Confident Instances ❖Select seeds to train an initial name tagger Training Data Seeds Select Generate (Sec. 2.2) ✤Annotation Generation (Section 2.2) ✤Self Training (Section 2.3) Train ✤Training Data Selection (Section 2.4) Wikipedia Articles Training Data Entity Commonness Topic Relatedness Based Ranking Selected Data (Mitt Romney was born in Detroit, Michigan. He graduated from Harvard University.) Propagate Project Figure 2: Name Tagging Annotation Generation and Training 2.2 Initial Annotation Generation We start by assigning an entity type or “other” to each English Wikipedia entry. We utilize the AMR corpus where each entity name mention is manually labeled as one of 139 types 1947 and linked to Wikipedia if it’s linkable. In total we obtain 2,756 entity mentions, along with their AMR entity types, Wikipedia titles, YAGO entity types and DBpedia properties. For each pair of AMR entity type ta and YAGO entity type ty, we compute the Pointwise Mutual Information (PMI) (Ward Church and Hanks, 1990) of mapping ta to ty across all mentions in the AMR corpus. Therefore, each name mention is also assigned a list of YAGO entity types, ranked by their PMI scores with AMR types. In this way, our framework produces three levels of entity typing schemas with different granularity: 4 main types (Person (PER), Organization (ORG), Geo-political Entity (GPE), Location (LOC)), 139 types in AMR, and 9,154 types in YAGO. Then we leverage an entity’s properties in DBpedia as features for assigning types. For example, an entity with a birth date is likely to be a person, while an entity with a population property is likely to be a geo-political entity. Using all DBpedia entity properties as features (60,231 in total), we train Maximum Entropy models to assign types with three levels of granularity to all English Wikipedia pages. In total we obtained 10 million English pages labeled as entities of interest. Nothman et al. (2013) manually annotated 4,853 English Wikipedia pages with 6 coarsegrained types (Person, Organization, Location, Other, Non-Entity, Disambiguation Page). Using this data set for training and testing, we achieved 96.0% F-score on this initial step, slightly better than their results (94.6% F-score). Next, we propagate the label of each English Wikipedia page to all entity mentions in all languages in the entire Wikipedia through monolingual redirect links and cross-lingual links. 2.3 Learning Model and KB Derived Features We use a typical neural network architecture that consists of Bi-directional Long Short-Term Memory and Conditional Random Fields (CRFs) network (Lample et al., 2016) as our underlying learning model for the name tagger for each language. In the following we will describe how we acquire linguistic features. 
When a Wikipedia user tries to link an entity mention in a sentence to an existing page, she/he will mark the title (the entity’s canonical form, without affixes) within the mention using brackets “[[]]”, from which we can naturally derive a word’s stem and affixes for free. For example, from the Wikipedia markups of the following Turkish sentence: “Kıta Fransası, g¨uneyde [[Akdeniz]]den kuzeyde [[Mans¸ Denizi]] ve [[Kuzey Denizi]]ne, do˘guda [[Ren Nehri]]nden batıda [[Atlas Okyanusu]]na kadar yayılan topraklarda yer alır. (Metropolitan France extends from the Mediterranean Sea to the English Channel and the North Sea, and from the Rhine to the Atlantic Ocean.)”, we can learn the following suffixes: “den”, “ne”, “nden” and “na”. We use such affix lists to perform basic word stemming, and use them as additional features to determine name boundary and type. For example, “den” is a noun suffix which indicates ablative case in Turkish. [[Akdeniz]]den means “from Mediterranean Sea”. Note that this approach can only perform morphology analysis for words whose stem forms and affixes are directly concatenated. Table 1 summarizes name tagging features. Features Descriptions Form Lowercase forms of (w−1, w0, w+1) Case Case of w0 Syllable The first and the last character of w0 Stem Stems of (w−1, w0, w+1) Affix Affixes of (w−1, w0, w+1) Gazetteer Cross-lingual gazetteers learned from training data Embeddings Character embeddings and word embeddings 4learned from training data Table 1: Name Tagging Features 2.4 Self-Training to Enrich and Refine Labels The name annotations acquired from the above procedure are far from complete to compete with manually labeled gold-standard data. For example, if a name mention appears multiple times in a Wikipedia article, only the first mention is labeled with an anchor link. We apply self-training to propagate and refine the labels. We first train an initial name tagger using seeds selected from the labeled data. We adopt an idea from (Guo et al., 2014) which computes Normalized Pointwise Mutual Information (NPMI) (Bouma, 2009) between a tag and a token: 4For languages that don’t have word segmentation, we consider each character as a token, and use character embeddings only. 1948 NPMI(tag, token) = ln p(tag,token) p(tag)p(token) −ln p(tag, token) (1) Then we select the sentences in which all annotations satisfy NPMI(tag, token) > τ as seeds 5. For all Wikipedia articles in a language, we cluster the unlabeled sentences into n clusters 6 by collecting sentences with low cross-entropy into the same cluster. Then we apply the initial tagger to the first unlabeled cluster, select the automatically labeled sentences with high confidence, add them back into the training data, and then re-train the tagger. This procedure is repeated n times until we scan through all unlabeled data. 2.5 Final Training Data Selection for Populous Languages For some populous languages that have many millions of pages in Wikipedia, we obtain many sentences from self-training. In some emergent settings such as natural disasters it’s important to train a system rapidly. Therefore we develop the following effective methods to rank and select high-quality annotated sentences. Commonness: we prefer sentences that include common entities appearing frequently in Wikipedia. We rank names by their frequency and dynamically set the frequency threshold to select a list of common names. We first initialize the name frequency threshold S to 40. 
If the number of the sentences is more than a desired size D for training 7, we set the threshold S = S + 5, otherwise S = S −5. We iteratively run the selection algorithm until the size of the training set reaches D for a certain S. Topical Relatedness: Various criteria should be adopted for different scenarios. Our previous work on event extraction (Li et al., 2011) found that by carefully select 1/3 topically related training documents for a test set, we can achieve the same performance as a model trained from the entire training set. Using an emergent disaster setting as a use case, we prefer sentences that include entities related to disaster related topics. We run an English name tagger (Manning et al., 2014) and entity linker (Pan et al., 2015) on the Leidos corpus released by the DARPA LORELEI 5τ = 0 in our experiment. 6n = 20 in our experiment. 7D = 30,000 in our experiment. program 8. The Leidos corpus consists of documents related to various disaster topics. Based on the linked Wikipedia pages, we rank the frequency of Wikipedia categories and select the top 1% categories (4,035 in total) for our experiments. Some top-ranked topic labels include “International medical and health organizations”, “Human rights organizations”, “International development agencies”, “Western Asian countries”, “Southeast African countries”and “People in public health”. Then we select the annotated sentences including names (e.g., “World Health Organization”) in all languages labeled with these topic labels to train the final model. 3 Cross-lingual Entity Linking 3.1 Overview After we extract names from test documents in a source language, we translate them into English by automatically mined word translation pairs (Section 3.2), and then link translated English mentions to an external English KB (Section 3.3). The overall linking process is illustrated in Figure 3. m1 m2 m3 m4 m5 m6 t5 t1 t4 t3 t6 t2 Translate to English (e.g., m1 to t1) Construct Knowledge Networks (KNs) KNs in English KB Salience, Similarity and Coherence Comparison Tagged Mentions Linking KNs in Source Translated and Linked Mentions e1 t1 m1 e1 t2 m2 e2 t3 m3 e3 t4 m4 t5 m5 NIL t6 m6 NIL Figure 3: Cross-lingual Entity Linking Overview 3.2 Name Translation The cross-lingual Wikipedia title pairs, generated through crowd-sourcing, generally follow a consistent style and format in each language. From Table 2 we can see that the order of modifier and head word keeps consistent in Turkish and English titles. 
8http://www.darpa.mil/program/low-resource-languagesfor-emergent-incidents 1949 Extracted Cross-lingual Wikipedia Title Pairs “Pekin” Pekin Beijing Pekin metrosu Beijing Subway Pekin Ulusal Stadyumu Beijing National Stadium “Teknoloji” N¨ukleer teknoloji Nuclear technology Teknoloji transferi Technology transfer Teknoloji e˘gitimi Technology education “Enstit¨us¨u” Torchwood Enstit¨us¨u Torchwood Institute Hudson Enstit¨us¨u Hudson Institute Smolny Enstit¨us¨u Smolny Institute “Pekin Teknoloji” [NONE] “Teknoloji Enstit¨us¨u” Kraliyet Teknoloji Enstit¨us¨u Royal Institute of Technology Karlsruhe Teknoloji Enstit¨us¨u Karlsruhe Institute of Technology Georgia Teknoloji Enstit¨us¨u Georgia Institute of Technology “Pekin Teknoloji Enstit¨us¨u” [NONE] Mined Word Translation Pairs Word Translation Alignment Confidence pekin Beijing Exact Match beijing 0.5263 peking 0.3158 teknoloji technology 0.8833 technological 0.0167 singularity 0.0167 enstit¨us¨u institute 0.2765 of 0.2028 for 0.0221 Table 2: Word Translation Mining from Crosslingual Wikipedia Title Pairs For each name mention, we generate all possible combinations of continuous tokens. For example, no Wikipedia titles contain the Turkish name “Pekin Teknoloji Enstit¨us¨u (Beijing Institute of Technology)”. We generate the following 6 combinations: “Pekin”, “Teknoloji”, “Enstit¨us¨u”, “Pekin Teknoloji”, “Teknoloji Enstit¨us¨u” and “Pekin Teknoloji Enstit¨us¨u”, and then extract all cross-lingual Wikipedia title pairs containing each combination. Finally we run GIZA++ (Josef Och and Ney, 2003) to extract word for word translations from these title pairs, as shown in Table 2. 3.3 Entity Linking Given a set of tagged name mentions M = {m1, m2, ..., mn}, we first obtain their English translations T = {t1, t2, ..., tn} using the approach described above. Then we apply an unsupervised collective inference approach to link T to the KB, similar to our previous work (Pan et al., 2015). The only difference is that we construct knowledge networks (KNs) g(ti) for T based on their co-occurrence within a context window 9 instead of their AMR relations, because AMR parsing is not available for foreign languages. For each translated name mention ti, an initial list of candidate entities E(ti) = {e1, e2, ..., ek} is generated based on a surface form dictionary mined from KB properties (e.g., redirects, names, aliases). If no surface form can be matched then we determine the mention as unlinkable. Then we construct KNs g(ej) for each entity candidate ej in ti’s entity candidate list E(ti). We compute the similarity between g(ti) and g(ej) based on three measures: salience, similarity and coherence, and select the candidate entity with the highest score. 4 Experiments 4.1 Performance on Wikipedia Data We first conduct an evaluation using Wikipedia data as “silver-standard”. For each language, we use 70% of the selected sentences for training and 30% for testing. For entity linking, we don’t have ground truth for unlinkable mentions, so we only compute linking accuracy for linkable name mentions. Table 3 presents the overall performance for three coarse-grained entity types: PER, ORG and GPE/LOC, sorted by the number of name mentions. Figure 4 and Figure 5 summarize the performance, with some example languages marked for various ranges of data size. 
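(As a brief aside referring back to the name-translation step in Section 3.2: the sketch below enumerates a mention's continuous-token combinations, which drive the cross-lingual title-pair lookup. The whitespace tokenization and the printed ordering are assumptions of this sketch, not the system's implementation.)

```python
def continuous_combinations(tokens):
    """All contiguous token spans of a mention, e.g. the 6 spans of
    "Pekin Teknoloji Enstitüsü" used to query cross-lingual title pairs."""
    spans = []
    for length in range(1, len(tokens) + 1):
        for start in range(len(tokens) - length + 1):
            spans.append(" ".join(tokens[start:start + length]))
    return spans

print(continuous_combinations("Pekin Teknoloji Enstitüsü".split()))
# ['Pekin', 'Teknoloji', 'Enstitüsü', 'Pekin Teknoloji',
#  'Teknoloji Enstitüsü', 'Pekin Teknoloji Enstitüsü']
```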
Figure 4: Summary of Name Tagging F-score (%) on Wikipedia Data. (Plot of F-score against the number of name mentions, with example languages marked in the ranges [10k, 12M], [500, 10k) and (0, 500), e.g., Japanese 79.2, Thai 56.2, Tamil 77.9, Kannada 60.1, Kabyle 75.7, Burmese 51.5, Rundi 40.0, Nyanja 56.0, Xhosa 35.3.)

Not surprisingly, name tagging performs better for languages with more training mentions. The F-score is generally higher than 80% when there are more than 10K mentions, and it drops significantly when there are fewer than 250 mentions. The languages with low name tagging performance can be categorized into three types: (1) the number of mentions is less than 2K, as in the Atlantic-Congo (Wolof), Berber (Kabyle), Chadic (Hausa), Oceanic (Fijian), Hellenic (Greek), Igboid (Igbo), Mande (Bambara), Kartvelian (Georgian, Mingrelian), Timor-Babar (Tetum), Tupian (Guarani) and Iroquoian (Cherokee) language groups; precision is generally higher than recall for most of these languages, because the small number of linked mentions is not enough to cover a wide variety of entities; (2) there is no space between words, as in Chinese, Thai and Japanese; (3) the language is not written in Latin script, as in the Dravidian group (Tamil, Telugu, Kannada, Malayalam).

The training instances for the various entity types are quite imbalanced for some languages. For example, the Latin data includes 11% PER names, 84% GPE/LOC names and 5% ORG names. As a result, the performance on ORG is the lowest, while GPE and LOC achieve higher than 75% F-scores for most languages.

Figure 5: Summary of Entity Linking Accuracy (%) on Wikipedia Data. (Plot of linking accuracy against the number of name mentions, with example languages marked in the ranges [10k, 12M], [500, 10k) and (0, 500), e.g., Esperanto 81.4, Chechen 93.5, Croatian 88.6, Maori 93.4, Yiddish 87.2, Odia 77.9, Akan 92.2, Sango 86.8, Rundi 78.6.)

The linking accuracy is higher than 80% for most languages. Also note that since we don’t have perfect annotations on Wikipedia data for any language, these results can be used to estimate how predictable our “silver-standard” data is, but they are not directly comparable to traditional name tagging results measured against gold-standard data annotated by humans.

Footnote 10: The mapping to language names can be found at http://nlp.cs.rpi.edu/wikiann/mapping
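Before moving on to non-Wikipedia data, the candidate generation and scoring of Section 3.3 can be made concrete with the heavily simplified sketch below. The paper compares knowledge networks using salience, similarity and coherence; here those measures are stand-in functions, the surface-form dictionary and entity ids are made up, and the simple summation of scores is an assumption rather than the paper's exact combination.

```python
def link_mention(translated_mention, surface_form_dict, scorers, context_kn):
    """Return the best KB candidate for a translated mention, or None (NIL).

    surface_form_dict: maps lowercased surface strings (from redirects,
                       names, aliases) to lists of candidate KB entity ids.
    scorers:           functions (entity_id, context_kn) -> float, standing
                       in for the salience / similarity / coherence measures.
    context_kn:        the mention's knowledge network, e.g. the surrounding
                       mentions in a +/-4 context window.
    """
    candidates = surface_form_dict.get(translated_mention.lower())
    if not candidates:
        return None  # no surface form match -> unlinkable (NIL)

    def combined_score(entity_id):
        # simple sum of the individual measures; the real combination may differ
        return sum(score(entity_id, context_kn) for score in scorers)

    return max(candidates, key=combined_score)

# Toy usage: the dictionary entries, entity ids, and scorer are hypothetical.
sf_dict = {"world health organization": ["WHO_entity", "Other_entity"]}
scorers = [lambda e, kn: 1.0 if e == "WHO_entity" else 0.0]
print(link_mention("World Health Organization", sf_dict, scorers, context_kn=[]))
```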
We can see that our approach advances state-of-the-art languageindependent methods (Zhang et al., 2016a; Tsai et al., 2016) on the same data sets for most languages, and achieves 6.5% - 17.6% lower F-scores than the models trained from manually annotated gold-standard documents that include thousands of name mentions. To fill in this gap, we would need to exploit more linguistic resources. Mayfield et al. (2011) constructed a crosslingual entity linking collection for 21 languages, which covers ground truth for the largest number of languages to date. Therefore we compare our approach with theirs that uses a supervised name transliteration model (McNamee et al., 2011). The entity linking results on non-NIL mentions are presented in Table 6. We can see that except Romanian, our approach outperforms or achieves comparable accuracy as their method on all languages, without using any additional resources or tools such as name transliteration. 4.3 Analysis Impact of KB-derived Morphological Features We measured the impact of our affix lists derived from Wikipedia markups on two morphologicallyrich languages: Turkish and Uzbek. The morphol11McNamee et al. (2011) did not develop a model for Chinese even though Chinese data set was included in the collection. 1951 L M F A L M F A L M F A L M F A en 12M 91.8 84.3 mr 18K 82.4 89.8 szl 3.0K 82.7 92.2 tet 1.2K 73.5 92.2 ja 1.9M 79.2 86.7 bar 17K 97.1 93.1 tk 2.9K 86.3 90.1 sc 1.2K 78.1 91.6 sv 1.8M 93.6 89.7 cv 15K 95.7 93.2 z-c 2.9K 88.2 87.0 wuu 1.2K 79.7 90.8 de 1.7M 89.0 89.8 ba 15K 93.8 92.6 mn 2.9K 76.4 84.4 ksh 1.2K 56.0 83.6 fr 1.4M 93.3 91.2 mg 14K 98.7 90.1 kv 2.9K 89.7 93.2 pfl 1.1K 42.9 80.4 ru 1.4M 90.1 90.0 hi 14K 86.9 88.0 f-v 2.9K 65.4 88.8 haw 1.1K 88.0 84.6 it 1.2M 96.6 90.2 an 14K 93.0 91.1 gan 2.9K 84.9 90.9 am 1.1K 84.7 83.0 sh 1.1M 97.8 90.9 als 14K 85.0 90.9 fur 2.8K 84.5 89.2 bcl 1.1K 82.3 91.7 es 992K 93.9 90.2 sco 14K 86.8 89.6 kw 2.8K 94.0 93.3 nah 1.1K 89.9 89.6 pl 931K 90.0 91.3 bug 13K 99.9 90.0 ilo 2.8K 90.3 91.1 udm 1.1K 88.9 85.0 nl 801K 93.2 91.5 lb 13K 81.5 88.4 mwl 2.7K 76.1 89.4 su 1.1K 72.7 89.2 zh 718K 82.0 90.0 fy 13K 86.6 91.2 mai 2.7K 99.7 90.0 dsb 1.1K 84.7 82.1 pt 576K 90.7 90.3 new 12K 98.2 91.5 nv 2.7K 90.9 91.6 tpi 1.1K 83.3 90.1 uk 472K 91.5 89.4 ga 12K 85.3 91.3 sd 2.7K 65.8 90.9 lo 1.0K 52.8 88.6 cs 380K 94.6 90.5 ht 12K 98.9 93.4 os 2.7K 87.4 89.4 bpy 1.0K 98.3 89.3 sr 365K 95.3 91.2 war 12K 94.9 89.8 mzn 2.6K 86.4 86.9 ki 1.0K 97.5 90.0 hu 357K 95.9 90.4 te 11K 80.5 86.1 azb 2.6K 88.4 90.6 ty 1.0K 86.7 89.8 fi 341K 93.4 90.6 is 11K 80.2 83.2 bxr 2.6K 75.0 90.3 hif 1.0K 81.1 93.1 no 338K 94.1 90.6 pms 10K 98.0 89.5 vec 2.6K 87.9 91.3 ady 979 92.7 91.2 fa 294K 96.4 86.4 zea 10K 86.8 90.3 bo 2.6K 70.4 88.9 ig 968 74.4 91.8 ko 273K 90.6 89.8 sw 9.3K 93.4 90.8 yi 2.6K 76.9 87.2 tyv 903 91.1 91.0 ca 265K 90.3 90.3 ia 8.9K 75.4 90.5 frp 2.5K 86.2 92.3 tn 902 76.9 90.1 tr 223K 96.9 87.3 qu 8.7K 92.5 88.2 myv 2.5K 88.6 92.2 cu 898 75.5 91.3 ro 197K 90.6 89.2 ast 8.3K 89.2 92.0 se 2.5K 90.3 83.5 sm 888 80.0 85.3 bg 186K 65.8 88.4 rm 8.0K 82.0 91.3 cdo 2.5K 91.0 91.9 to 866 92.3 90.7 ar 185K 88.3 89.7 ay 7.9K 88.5 91.0 nso 2.5K 98.9 90.0 tum 831 93.8 92.9 id 150K 87.8 90.0 ps 7.7K 66.9 89.9 gom 2.4K 88.8 90.0 r-r 750 93.0 85.9 he 145K 79.0 91.0 mi 7.5K 95.9 93.4 ky 2.4K 71.8 88.4 om 709 74.2 81.1 eu 137K 82.5 89.2 gag 7.3K 89.3 84.0 n-n 2.3K 92.6 91.6 glk 688 59.5 80.7 da 133K 87.1 85.8 nds 7.0K 84.5 89.8 ne 2.3K 81.5 91.1 lbe 651 88.9 90.8 vi 125K 89.6 82.0 gd 6.7K 92.8 91.3 sa 2.2K 73.9 91.3 bjn 640 64.7 89.5 
th 96K 56.2 87.7 mrj 6.7K 97.0 91.6 mt 2.2K 82.3 90.3 srn 619 76.5 89.3 sk 93K 87.3 90.3 so 6.5K 85.8 91.7 my 2.2K 51.5 91.2 mdf 617 82.2 92.4 uz 92K 98.3 90.3 co 6.0K 85.4 89.9 bh 2.2K 92.6 92.5 tw 572 94.6 90.4 eo 85K 88.7 81.4 pnb 6.0K 90.8 86.2 vls 2.2K 78.2 89.1 pih 555 87.2 89.0 la 81K 90.8 89.4 pcd 5.8K 86.1 90.8 ug 2.1K 79.7 92.4 rmy 551 68.5 86.4 z-m 79K 99.3 89.2 wa 5.8K 81.6 82.0 si 2.1K 87.7 90.5 lg 530 98.8 89.3 lt 79K 86.3 87.2 frr 5.7K 70.1 86.3 kaa 2.1K 55.2 89.5 chr 530 70.6 86.2 el 78K 84.6 88.3 scn 5.6K 93.2 89.2 b-s 2.1K 84.5 88.0 ha 517 75.0 87.9 ce 77K 99.4 93.5 fo 5.4K 83.6 92.2 krc 2.1K 84.9 88.9 ab 506 60.0 92.4 ur 77K 96.4 89.3 ckb 5.3K 88.1 89.3 ie 2.1K 88.8 92.8 got 506 91.7 90.1 hr 76K 82.8 88.5 li 5.2K 89.4 91.3 dv 2.0K 76.2 90.5 bi 490 88.5 88.3 ms 75K 86.8 84.1 nap 4.9K 86.9 89.9 xmf 2.0K 73.4 92.2 st 455 84.4 89.8 et 69K 86.8 89.9 crh 4.9K 90.1 89.9 rue 1.9K 82.7 92.2 chy 450 85.1 89.9 kk 68K 88.3 81.8 gu 4.6K 76.0 90.8 pa 1.8K 74.8 84.3 iu 450 66.7 88.9 ceb 68K 96.3 86.6 km 4.6K 52.2 89.9 eml 1.8K 83.5 88.5 zu 449 82.3 89.9 sl 67K 89.5 90.1 tg 4.5K 88.3 90.6 arc 1.8K 68.5 89.2 pnt 445 61.5 89.6 nn 65K 88.1 89.9 hsb 4.5K 91.5 92.0 pdc 1.8K 78.1 91.1 ik 436 94.1 88.2 sim 59K 85.7 90.7 c-z 4.5K 75.0 86.6 kbd 1.7K 74.9 80.6 lrc 416 65.2 86.9 lv 57K 92.1 89.8 jv 4.4K 82.6 87.8 pap 1.7K 88.8 58.4 bm 386 77.3 89.1 tt 53K 87.7 91.4 lez 4.4K 84.2 82.3 jbo 1.7K 92.4 91.6 za 382 57.1 88.2 gl 52K 87.4 88.2 hak 4.3K 85.5 88.1 diq 1.7K 79.3 80.9 mo 373 69.6 88.2 ka 49K 79.8 89.5 ang 4.2K 84.0 92.0 pag 1.7K 91.2 89.5 ss 362 69.2 91.8 vo 47K 98.5 90.8 r-t 4.2K 88.1 89.0 kg 1.6K 82.1 90.1 ee 297 63.2 90.0 lmo 39K 98.3 89.0 kn 4.1K 60.1 91.7 m-b 1.6K 78.3 80.0 dz 262 50.0 90.0 be 38K 84.1 88.3 csb 4.1K 87.0 92.3 rw 1.6K 95.4 91.5 ak 258 86.8 92.2 mk 35K 93.4 83.3 lij 4.1K 72.3 91.9 or 1.6K 86.4 77.9 sg 245 99.9 86.8 cy 32K 90.7 89.3 nov 4.0K 77.0 92.1 ln 1.6K 82.8 91.4 ts 236 93.3 88.9 bs 31K 84.8 89.8 ace 4.0K 81.6 90.3 kl 1.5K 75.0 90.9 rn 185 40.0 78.6 ta 31K 77.9 88.2 gn 4.0K 71.2 89.3 sn 1.5K 95.0 93.3 ve 183 99.9 88.0 hy 28K 90.4 81.3 koi 4.0K 89.6 92.9 av 1.4K 82.0 83.7 ny 169 56.0 90.2 bn 27K 93.8 87.2 mhr 3.9K 86.7 92.4 as 1.4K 89.6 89.3 ff 168 76.9 88.9 az 26K 85.1 86.0 io 3.8K 87.2 92.3 stq 1.4K 70.0 90.6 ch 159 70.6 90.0 sq 26K 94.1 92.1 min 3.8K 85.8 89.9 gv 1.3K 84.8 89.1 xh 141 35.3 89.5 ml 24K 82.4 88.8 arz 3.8K 77.8 89.3 wo 1.3K 87.7 90.0 fj 126 75.0 91.3 br 22K 87.0 85.5 ext 3.7K 77.8 91.6 xal 1.3K 98.7 90.9 ks 124 75.0 83.3 z-y 22K 87.3 88.4 yo 3.7K 94.0 90.8 nrm 1.3K 96.4 92.7 ti 52 94.2 90.0 af 21K 85.7 91.1 sah 3.6K 91.2 93.0 na 1.2K 87.6 88.7 cr 49 91.8 89.8 b-x 20K 85.1 87.7 vep 3.5K 85.8 89.8 ltg 1.2K 74.3 92.1 pi 41 83.3 86.4 tl 19K 92.7 90.3 ku 3.3K 83.2 85.1 pam 1.2K 87.2 91.0 oc 18K 92.5 90.0 kab 3.3K 75.7 84.3 lad 1.2K 92.3 92.4 Table 3: Performance on Wikipedia Data (L: language ID 10; M: the number of name mentions; F: name tagging F-score (%); A: entity linking accuracy (%)) 1952 Language Gold Training Silver Training Test Bengali 8,760 22,093 3,495 Hungarian 3,414 34,022 1,320 Russian 2,751 35,764 1,213 Tamil 7,033 25,521 4,632 Tagalog 4,648 15,839 3,351 Turkish 3,067 37,058 2,172 Uzbek 3,137 64,242 2,056 Vietnamese 2,261 63,971 987 Yoruba 4,061 9,274 3,395 Table 4: # of Names in Non-Wikipedia Data Language Training from Gold Training from Silver (Zhang et al., 2016a) (Tsai et al., 2016) Bengali 61.6 44.0 34.8 43.3 Hungarian 63.9 47.9 Russian 61.8 49.4 Tamil 42.2 35.7 26.0 29.6 Tagalog 70.7 58.3 51.3 65.4 Turkish 66.0 51.5 43.6 47.1 Uzbek 
56.0 44.2 Vietnamese 54.3 44.5 Yoruba 55.1 37.6 36.0 36.7 Table 5: Name Tagging F-score (%) on NonWikipedia Data Language # of Non-NIL Mentions (Mayfield et al., 2011) Our Approach Arabic 661 70.6 80.2 Bulgarian 2,068 82.1 84.1 Chinese 956 - 11 91.0 Croatian 2,257 88.9 90.8 Czech 722 77.2 85.9 Danish 1,096 93.8 91.2 Dutch 1,087 92.4 89.2 Finnish 1,049 86.8 85.8 French 657 90.4 92.1 German 769 85.7 89.7 Greek 2,129 71.4 79.8 Italian 1,087 83.3 85.6 Macedonian 1,956 70.6 71.6 Portuguese 1,096 97.4 95.8 Romanian 2,368 93.5 88.7 Serbian 2,156 65.3 81.2 Spanish 743 87.3 91.5 Swedish 1,107 93.5 90.3 Turkish 2,169 92.5 92.2 Urdu 1,093 70.7 73.2 Table 6: Entity Linking Accuracy (%) on NonWikipedia Data ogy features contributed 11.1% and 7.1% absolute name tagging F-score gains to Turkish and Uzbek LORELEI data sets respectively. Impact of Self-Training Using Turkish as a case study, the learning curves of self-training on Wikipedia and non-Wikipedia test sets are shown in Figure 6. We can see that self-training provides significant improvement for both Wikipedia (6% absolute gain) and non-Wikipedia test data (12% absolute gain). As expected the learning curve on Wikipedia data is more smooth and converges more slowly than that of non-Wikipedia data. This indicates that when the training data is incomplete and noisy, the model can benefit from self-training through iterative label correction and propagation. Figure 6: Learning Curve of Self-training Impact of Topical Relatedness We also found that the topical relatedness measure proposed in Section 2.5 not only significantly reduces the size of training data and thus speeds up the training process for many languages, but also consistently improves the quality. For example, the Turkish name tagger trained from the entire data set without topic selection yields 49.7% Fscore on LORELEI data set, and the performance is improved to 51.5% after topic selection. 5 Related Work Wikipedia markup based silver standard generation: Our work was mainly inspired from previous work that leveraged Wikipedia markups to train name taggers (Nothman et al., 2008; Dakka and Cucerzan, 2008; Mika et al., 2008; Ringland et al., 2009; Alotaibi and Lee, 2012; Nothman et al., 2013; Althobaiti et al., 2014). Most of these previous methods manually classified many English Wikipedia entries into pre-defined entity types. In contrast, our approach doesn’t need any manual annotations or language-specific features, while generates both coarse-grained and fine-grained types. Many fine-grained entity typing approaches (Fleischman and Hovy, 2002; Giuliano, 1953 2009; Ekbal et al., 2010; Ling and Weld, 2012; Yosef et al., 2012; Nakashole et al., 2013; Gillick et al., 2014; Yogatama et al., 2015; Del Corro et al., 2015) also created annotations based on Wikipedia anchor links. Our framework performs both name identification and typing and takes advantage of richer structures in the KBs. Previous work on Arabic name tagging (Althobaiti et al., 2014) extracted entity titles as a gazetteer for stemming, and thus it cannot handle unknown names. We developed a new method to derive generalizable affixes for morphologically rich language based on Wikipedia markups. 
Wikipedia as background features for IE: Wikipedia pages have been used as additional features to improve various Information Extraction (IE) tasks, including name tagging (Kazama and Torisawa, 2007), coreference resolution (Paolo Ponzetto and Strube, 2006), relation extraction (Chan and Roth, 2010) and event extraction (Hogue et al., 2014). Other automatic name annotation generation methods have been proposed, including KB driven distant supervision (An et al., 2003; Mintz et al., 2009; Ren et al., 2015) and cross-lingual projection (Li et al., 2012; Kim et al., 2012; Che et al., 2013; Wang et al., 2013; Wang and Manning, 2014; Zhang et al., 2016b). Multi-lingual name tagging: Some recent research (Zhang et al., 2016a; Littell et al., 2016; Tsai et al., 2016) under the DARPA LORELEI program focused on developing name tagging techniques for low-resource languages. These approaches require English annotations for projection (Tsai et al., 2016), some input from a native speaker, either through manual annotations (Littell et al., 2016), or a linguistic survey (Zhang et al., 2016a). Without using any manual annotations, our name taggers outperform previous methods on the same data sets for many languages. Multi-lingual entity linking: NIST TAC-KBP Tri-lingual entity linking (Ji et al., 2016) focused on three languages: English, Chinese and Spanish. (McNamee et al., 2011) extended it to 21 languages. But their methods required labeled data and name transliteration. We share the same goal as (Sil and Florian, 2016) to extend cross-lingual entity linking to all languages in Wikipedia. They exploited Wikipedia links to train a supervised linker. We mine reliable word translations from cross-lingual Wikipedia titles, which enables us to adopt unsupervised English entity linking techniques such as (Pan et al., 2015) to directly link translated English name mentions to English KB. Efforts to save annotation cost for name tagging: Some previous work including (Ji and Grishman, 2006; Richman and Schone, 2008; Althobaiti et al., 2013) exploited semi-supervised methods to save annotation cost. We observed that self-training can provide further gains when the training data contains certain amount of noise. 6 Conclusions and Future Work We developed a simple yet effective framework that can extract names from 282 languages and link them to an English KB. This framework follows a fully automatic training and testing pipeline, without the needs of any manual annotations or knowledge from native speakers. We evaluated our framework on both Wikipedia articles and external formal and informal texts and obtained promising results. To the best of our knowledge, our multilingual name tagging and linking framework is applied to the largest number of languages. We release the following resources for each of these 282 languages: “silver-standard” name tagging and linking annotations with multiple levels of granularity, morphology analyzer if it’s a morphologically-rich language, and an endto-end name tagging and linking system. In this work, we treat all languages independently when training their corresponding name taggers. In the future, we will explore the topological structure of related languages and exploit cross-lingual knowledge transfer to enhance the quality of extraction and linking. The general idea of deriving noisy annotations from KB properties can also be extended to other IE tasks such as relation extraction. Acknowledgments This work was supported by the U.S. DARPA LORELEI Program No. 
HR0011-15-C-0115, ARL/ARO MURI W911NF-10-1-0533, DARPA DEFT No. FA8750-13-2-0041 and FA8750-13-20045, and NSF CAREER No. IIS-1523198. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. 1954 References Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1024–1029. https://doi.org/10.3115/v1/N15-1107. Fahd Alotaibi and Mark Lee. 2012. Mapping arabic wikipedia into the named entities taxonomy. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, pages 43–52. http://aclweb.org/anthology/C12-2005. Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. 2013. A semi-supervised learning approach to arabic named entity recognition. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013. INCOMA Ltd. Shoumen, BULGARIA, pages 32–40. http://aclweb.org/anthology/R13-1005. Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. 2014. Automatic creation of arabic named entity annotated corpus using wikipedia. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 106–115. https://doi.org/10.3115/v1/E14-3012. Joohui An, Seungwoo Lee, and Gary Geunbae Lee. 2003. Automatic acquisition of named entity tagged corpus from world wide web. In The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P03-2031. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics, pages 178–186. http://aclweb.org/anthology/W13-2322. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. ACM, New York, NY, USA, SIGMOD ’08, pages 1247–1250. https://doi.org/10.1145/1376616.1376746. Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceedings of the Biennial GSCL Conference 2009. Seng Yee Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, pages 152–160. http://aclweb.org/anthology/C10-1018. Wanxiang Che, Mengqiu Wang, D. Christopher Manning, and Ting Liu. 2013. Named entity recognition with bilingual constraints. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 52–62. 
http://aclweb.org/anthology/N13-1006. Wisam Dakka and Silviu Cucerzan. 2008. Augmenting wikipedia with named entity tags. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I. http://aclweb.org/anthology/I08-1071. Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. Finet: Context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 868–878. https://doi.org/10.18653/v1/D15-1103. Asif Ekbal, Eva Sourjikova, Anette Frank, and Simone Paolo Ponzetto. 2010. Assessing the challenge of fine-grained named entity recognition and classification. In Proceedings of the 2010 Named Entities Workshop. Association for Computational Linguistics, pages 93–101. http://aclweb.org/anthology/W10-2415. Michael Fleischman and Eduard Hovy. 2002. Fine grained classification of named entities. In COLING 2002: The 19th International Conference on Computational Linguistics. http://aclweb.org/anthology/C02-1130. Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Contextdependent fine-grained entity type tagging. CoRR abs/1412.1820. http://arxiv.org/abs/1412.1820. Claudio Giuliano. 2009. Fine-grained classification of named entities exploiting latent semantic kernels. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL2009). Association for Computational Linguistics, pages 201–209. http://aclweb.org/anthology/W091125. Stig-Arne Gr¨onroos, Sami Virpioja, Peter Smit, and Mikko Kurimo. 2014. Morfessor flatcat: An hmm-based method for unsupervised and semisupervised learning of morphology. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, pages 1177–1185. http://aclweb.org/anthology/C14-1111. 1955 Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Revisiting embedding features for simple semi-supervised learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 110–120. https://doi.org/10.3115/v1/D14-1012. Alexander Hogue, Joel Nothman, and James R. Curran. 2014. Unsupervised biographical event extraction using wikipedia traffic. In Proceedings of the Australasian Language Technology Association Workshop 2014. pages 41–49. http://aclweb.org/anthology/U14-1006. Heng Ji and Ralph Grishman. 2006. Analysis and repair of name tagger errors. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Association for Computational Linguistics, pages 420–427. http://aclweb.org/anthology/P062055. Heng Ji, Joel Nothman, and Hoa Trang Dang. 2016. Overview of tac-kbp2016 tri-lingual edl and its impact on end-to-end kbp. In Proceedings of the Text Analysis Conference. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, Volume 29, Number 1, March 2003 http://aclweb.org/anthology/J03-1002. Jun’ichi Kazama and Kentaro Torisawa. 2007. Exploiting wikipedia as external knowledge for named entity recognition. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). http://aclweb.org/anthology/D07-1073. Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 
2012. Multilingual named entity recognition using parallel data and metadata from wikipedia. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 694–702. http://aclweb.org/anthology/P121073. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 260–270. https://doi.org/10.18653/v1/N16-1030. Hao Li, Heng Ji, Hongbo Deng, and Jiawei Han. 2011. Exploiting background information networks to enhance bilingual event extraction through topic modeling. In Proceedings of International Conference on Advances in Information Mining and Management (IMMM2011). Qi Li, Haibo Li, Heng Ji, Wen Wang, Jing Zheng, and Fei Huang. 2012. Joint bilingual name tagging for parallel corpora. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management. ACM, New York, NY, USA, CIKM ’12, pages 1727–1731. https://doi.org/10.1145/2396761.2398506. Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’12, pages 94–100. Patrick Littell, Kartik Goyal, R. David Mortensen, Alexa Little, Chris Dyer, and Lori Levin. 2016. Named entity recognition for linguistic rapid response in low-resource languages: Sorani kurdish and tajik. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 998–1006. http://aclweb.org/anthology/C16-1095. Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. 2015. Yago3: A knowledge base from multilingual wikipedias. In Proceedings of the Conference on Innovative Data Systems Research. Alireza Mahmoudi, Mohsen Arabsorkhi, and Heshaam Faili. 2013. Supervised morphology generation using parallel corpus. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013. INCOMA Ltd. Shoumen, BULGARIA, pages 408– 414. http://aclweb.org/anthology/R13-1053. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, pages 55–60. https://doi.org/10.3115/v1/P14-5010. James Mayfield, Dawn Lawrie, Paul McNamee, and Douglas W. Oard. 2011. Building a cross-language entity linking collection in twenty-one languages. In Multilingual and Multimodal Information Access Evaluation: Second International Conference of the Cross-Language Evaluation Forum. Paul McNamee, James Mayfield, Dawn Lawrie, Douglas Oard, and David Doermann. 2011. Cross-language entity linking. In Proceedings of 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, pages 255–263. http://aclweb.org/anthology/I11-1029. Peter Mika, Massimiliano Ciaramita, Hugo Zaragoza, and Jordi Atserias. 2008. Learning to tag and tagging to learn: A case study on wikipedia. IEEE Intelligent Systems . 1956 Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. 
Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, pages 1003– 1011. http://aclweb.org/anthology/P09-1113. Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013. Fine-grained semantic typing of emerging entities. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1488–1497. http://aclweb.org/anthology/P13-1146. Joel Nothman, R. James Curran, and Tara Murphy. 2008. Transforming wikipedia into named entity training data. In Proceedings of the Australasian Language Technology Association Workshop 2008. pages 124–132. http://aclweb.org/anthology/U081016. Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artificial Intelligence 194:151–175. https://doi.org/10.1016/j.artint.2012.03.006. Xiaoman Pan, Taylor Cassidy, Ulf Hermjakob, Heng Ji, and Kevin Knight. 2015. Unsupervised entity linking with abstract meaning representation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1130– 1139. https://doi.org/10.3115/v1/N15-1119. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. http://aclweb.org/anthology/N06-1025. Xiang Ren, Ahmed El-Kishky, Chi Wang, Fangbo Tao, Clare R. Voss, and Jiawei Han. 2015. Clustype: Effective entity recognition and typing by relation phrase-based clustering. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, KDD ’15, pages 995–1004. https://doi.org/10.1145/2783258.2783362. E. Alexander Richman and Patrick Schone. 2008. Mining wiki resources for multilingual named entity recognition. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 1–9. http://aclweb.org/anthology/P08-1001. Nicky Ringland, Joel Nothman, Tara Murphy, and R. James Curran. 2009. Classifying articles in english and german wikipedia. In Proceedings of the Australasian Language Technology Association Workshop 2009. pages 20–28. http://aclweb.org/anthology/U09-1004. Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking. In Proceedings of ACL-08: HLT, Short Papers. Association for Computational Linguistics, pages 117–120. http://aclweb.org/anthology/P08-2030. Teemu Ruokolainen, Oskar Kohonen, Kairit Sirts, StigArne Gr¨onroos, Mikko Kurimo, and Sami Virpioja. 2016. A comparative study of minimally supervised morphological segmentation. Computational Linguistics . Avirup Sil and Radu Florian. 2016. One for all: Towards language independent named entity linking. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2255–2264. https://doi.org/10.18653/v1/P16-1213. Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. 
Cross-lingual named entity recognition via wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 219–228. https://doi.org/10.18653/v1/K16-1022. Mengqiu Wang, Wanxiang Che, and D. Christopher Manning. 2013. Joint word alignment and bilingual named entity recognition using dual decomposition. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1073–1082. http://aclweb.org/anthology/P13-1106. Mengqiu Wang and D. Christopher Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association of Computational Linguistics 2:55–66. http://aclweb.org/anthology/Q14-1005. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms mutual information, and lexicography. Computational Linguistics, Volume 16, Number 1, March 1990 http://aclweb.org/anthology/J901003. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, pages 291–296. https://doi.org/10.3115/v1/P15-2048. 1957 Amir Mohamed Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. Hyena: Hierarchical type classification for entity names. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, pages 1361–1370. http://aclweb.org/anthology/C12-2133. Boliang Zhang, Xiaoman Pan, Tianlu Wang, Ashish Vaswani, Heng Ji, Kevin Knight, and Daniel Marcu. 2016a. Name tagging for low-resource incident languages based on expectation-driven learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 249–259. https://doi.org/10.18653/v1/N16-1029. Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, and Weiran XU. 2016b. Bitext name tagging for cross-lingual entity annotation projection. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 461–470. http://aclweb.org/anthology/C16-1045. 1958
2017
178
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1959–1970, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1179

Adversarial Training for Unsupervised Bilingual Lexicon Induction

Meng Zhang†‡, Yang Liu†‡∗, Huanbo Luan†, Maosong Sun†‡
†State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China
‡Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China
[email protected], [email protected], [email protected], [email protected]
∗Corresponding author.

Abstract

Word embeddings are well known to capture linguistic regularities of the language on which they are trained. Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of a parallel corpus or a seed lexicon. In this work, we show that such a cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.

1 Introduction

As the word is the basic unit of a language, improving its representation has a significant impact on various natural language processing tasks. Continuous word representations, commonly known as word embeddings, have formed the basis for numerous neural network models since their advent. Their popularity results from the performance boost they bring, which should in turn be attributed to the linguistic regularities they capture (Mikolov et al., 2013b).

Soon following the success on monolingual tasks, the potential of word embeddings for cross-lingual natural language processing has attracted much attention. In their pioneering work, Mikolov et al. (2013a) observe that word embeddings trained separately on monolingual corpora exhibit isomorphic structure across languages, as illustrated in Figure 1. This interesting finding is in line with research on human cognition (Youn et al., 2016). It also means a linear transformation may be established to connect word embedding spaces, allowing word feature transfer. This has far-reaching implications for low-resource scenarios (Daumé III and Jagarlamudi, 2011; Irvine and Callison-Burch, 2013), because word embeddings only require plain text to train, which is the most abundant form of linguistic resource.

Figure 1: Illustrative monolingual word embeddings of Spanish and English, adapted from (Mikolov et al., 2013a). Although trained independently, the two sets of embeddings exhibit approximate isomorphism. (The plot shows the Spanish words caballo (horse), cerdo (pig) and gato (cat) and the English words horse, pig and cat in corresponding positions.)

However, connecting separate word embedding spaces typically requires supervision from cross-lingual signals. For example, Mikolov et al.
(2013a) use five thousand seed word translation pairs to train the linear transformation. In a recent study, Vulić and Korhonen (2016) show that at least hundreds of seed word translation pairs are needed for the model to generalize. This is unfortunate for low-resource languages and domains, because data encoding cross-lingual equivalence is often expensive to obtain.

Figure 2: (a) The unidirectional transformation model directly inspired by the adversarial game: the generator G tries to transform source word embeddings (squares) to make them seem like target ones (dots), while the discriminator D tries to classify whether the input embeddings are generated by G or are real samples from the target embedding distribution. (b) The bidirectional transformation model: two generators with tied weights perform transformation between languages, and two separate discriminators are responsible for each language. (c) The adversarial autoencoder model: the generator aims to make the transformed embeddings not only indistinguishable by the discriminator, but also recoverable, as measured by the reconstruction loss L_R.

In this work, we aim to entirely eliminate the need for cross-lingual supervision. Our approach draws inspiration from recent advances in generative adversarial networks (Goodfellow et al., 2014). We first formulate our task in a fashion that naturally admits an adversarial game. Then we propose three models that implement the game, and explore techniques to ensure the success of training. Finally, our evaluation on the bilingual lexicon induction task reveals encouraging performance, even though this task appears formidable without any cross-lingual supervision.

2 Models

In order to induce a bilingual lexicon, we start from two sets of monolingual word embeddings with dimensionality d. They are trained separately on two languages. Our goal is to learn a mapping function f: R^d → R^d so that for a source word embedding x, f(x) lies close to the embedding of its target language translation y. The learned mapping function can then be used to translate each source word x by finding the nearest target embedding to f(x).

We consider x to be drawn from a distribution p_x, and similarly y ∼ p_y. The key intuition here is to find the mapping function that makes f(x) seem to follow the distribution p_y, for all x ∼ p_x. From this point of view, we design an adversarial game as illustrated in Figure 2(a): the generator G implements the mapping function f, trying to make f(x) passable as target word embeddings, while the discriminator D is a binary classifier striving to distinguish between fake target word embeddings f(x) ∼ p_{f(x)} and real ones y ∼ p_y. This intuition can be formalized as the minimax game min_G max_D V(D, G) with value function

V(D, G) = E_{y∼p_y}[log D(y)] + E_{x∼p_x}[log(1 − D(G(x)))].  (1)

Theoretical analysis reveals that adversarial training tries to minimize the Jensen-Shannon divergence JSD(p_y || p_{f(x)}) (Goodfellow et al., 2014). Importantly, the minimization happens at the distribution level, without requiring word translation pairs to supervise training.

2.1 Model 1: Unidirectional Transformation

The first model directly implements the adversarial game, as shown in Figure 2(a). As hinted by the isomorphism shown in Figure 1, previous works typically choose the mapping function f to be a linear map (Mikolov et al., 2013a; Dinu et al., 2015; Lazaridou et al., 2015).
We therefore parametrize the generator as a transformation matrix G ∈Rd×d. We also tried non-linear maps parametrized by neural networks, without success. In fact, if the generator is given sufficient capacity, it can in principle learn a constant mapping function to a target word embedding, which makes the discriminator impossible to distinguish, much like the “mode collapse” problem widely observed in the image domain (Radford et al., 2015; Salimans et al., 2016). We therefore believe it is crucial to grant the generator with suitable capacity. As a generic binary classifier, a standard feedforward neural network with one hidden layer is used to parametrize the discriminator D, and its loss function is the usual cross-entropy loss, as in the value function (1): LD = −log D (y) −log (1 −D (Gx)) . (2) For simplicity, here we write the loss with a minibatch size of 1; in our experiments we use 128. The generator loss is given by LG = −log D (Gx) . (3) In line with previous work (Goodfellow et al., 2014), we find this loss easier to minimize than the original form log (1 −D (Gx)). Orthogonal Constraint The above model is very difficult to train. One possible reason is that the parameter search space Rd×d for the generator may still be too large. Previous works have attempted to constrain the transformation matrix to be orthogonal (Xing et al., 2015; Zhang et al., 2016b; Artetxe et al., 2016). An orthogonal transformation is also theoretically appealing for its self-consistency (Smith et al., 2017) and numerical stability. However, using constrained optimization for our purpose is cumbersome, so we opt for an orthogonal parametrization (Mhammedi et al., 2016) of the generator instead. 2.2 Model 2: Bidirectional Transformation The orthogonal parametrization is still quite slow. We can relax the orthogonal constraint and only require the transformation to be self-consistent (Smith et al., 2017): If G transforms the source word embedding space into the target language space, its transpose G⊤should transform the target language space back to the source. This can be implemented by two unidirectional models with a tied generator, as illustrated in Figure 2(b). Two separate discriminators are used, with the same cross-entropy loss as Equation (2) used by Model 1. The generator loss is given by LG = −log D1 (Gx) −log D2  G⊤x  . (4) 2.3 Model 3: Adversarial Autoencoder As another way to relax the orthogonal constraint, we introduce the adversarial autoencoder (Makhzani et al., 2015), depicted in Figure 2(c). After the generator G transforms a source word embedding x into a target language representation Gx, we should be able to reconstruct the source word embedding x by mapping back with G⊤. We therefore introduce the reconstruction loss measured by cosine similarity: LR = −cos  x, G⊤Gx  . (5) Note that this loss will be minimized if G is orthogonal. With this term included, the loss function for the generator becomes LG = −log D (Gx) −λ cos  x, G⊤Gx  , (6) where λ is a hyperparameter that balances the two terms. λ = 0 recovers the unidirectional transformation model, while larger λ should enforce a stricter orthogonal constraint. 3 Training Techniques Generative adversarial networks are notoriously difficult to train, and investigation into stabler training remains a research frontier (Radford et al., 2015; Salimans et al., 2016; Arjovsky and Bottou, 2017). We contribute in this aspect by reporting techniques that are crucial to successful training for our task. 
3.1 Regularizing the Discriminator Recently, it has been suggested to inject noise into the input to the discriminator (Sønderby et al., 1961 2016; Arjovsky and Bottou, 2017). The noise is typically additive Gaussian. Here we explore more possibilities, with the following types of noise, injected into the input and hidden layer: • Multiplicative Bernoulli noise (dropout) (Srivastava et al., 2014): ϵ ∼Bernoulli (p). • Additive Gaussian noise: ϵ ∼N 0, σ2 . • Multiplicative Gaussian noise: ϵ ∼ N 1, σ2 . As noise injection is a form of regularization (Bishop, 1995; Van der Maaten et al., 2013; Wager et al., 2013), we also try l2 regularization, and directly restricting the hidden layer size to combat overfitting. Our findings include: • Without regularization, it is not impossible for the optimizer to find a satisfactory parameter configuration, but the hidden layer size has to be tuned carefully. This indicates that a balance of capacity between the generator and discriminator is needed. • All forms of regularization help training by allowing us to liberally set the hidden layer size to a relatively large value. • Among the types of regularization, multiplicative Gaussian injected into the input is the most effective, and additive Gaussian is similar. On top of input noise, hidden layer noise helps slightly. In the following experiments, we inject multiplicative Gaussian into the input and hidden layer of the discriminator with σ = 0.5. 3.2 Model Selection From a typical training trajectory shown in Figure 3, we observe that training is not convergent. In fact, simply using the model saved at the end of training gives poor performance. Therefore we need a mechanism to select a good model. We observe there are sharp drops of the generator loss LG, and find they correspond to good models, as the discriminator gets confused at these points with its classification accuracy (D accuracy) dropping simultaneously. Interestingly, the reconstruction loss LR and the value of G⊤G −I F exhibit synchronous drops, even if we use the unidirectional transformation model (λ = 0). This means a good transformation matrix is indeed 0 200000 400000 # minibatches 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 LG, D accuracy, LR 0 2 4 6 8 10 ||G⊤G−I||F ||G⊤G−I||F LG D accuracy LR Figure 3: A typical training trajectory of the adversarial autoencoder model with λ = 1. The values are averages within each minibatch. nearly orthogonal, and justifies our encouragement of G towards orthogonality. With this finding, we can train for sufficient steps and save the model with the lowest generator loss. As we aim to find the cross-lingual transformation without supervision, it would be ideal to determine hyperparameters without a validation set. The sharp drops can also be indicative in this case. If a hyperparameter configuration is poor, those values will oscillate without a clear drop. Although this criterion is somewhat subjective, we find it to be quite feasible in practice. 3.3 Other Training Details Our approach takes monolingual word embeddings as input. We train the CBOW model (Mikolov et al., 2013b) with default hyperparameters in word2vec.1 The embedding dimension d is 50 unless stated otherwise. Before feeding them into our system, we normalize the word embeddings to unit length. When sampling words for adversarial training, we penalize frequent words in a way similar to (Mikolov et al., 2013b). G is 1https://code.google.com/archive/p/word2vec 1962 initialized with a random orthogonal matrix. 
The hidden layer size of D is 500. Adversarial training involves alternate gradient update of the generator and discriminator, which we implement with a simpler variant algorithm described in (Nowozin et al., 2016). Adam (Kingma and Ba, 2014) is used as the optimizer, with default hyperparameters. For the adversarial autoencoder model, λ = 1 generally works well, but λ = 10 appears stabler for the low-resource Turkish-English setting. 4 Experiments We evaluate the quality of the cross-lingual embedding transformation on the bilingual lexicon induction task. After a source word embedding is transformed into the target space, its M nearest target embeddings (in terms of cosine similarity) are retrieved, and compared against the entry in a ground truth bilingual lexicon. Performance is measured by top-M accuracy (Vuli´c and Moens, 2013): If any of the M translations is found in the ground truth bilingual lexicon, the source word is considered to be handled correctly, and the accuracy is calculated as the percentage of correctly translated source words. We generally report the harshest top-1 accuracy, unless when comparing with published figures in Section 4.4. Baselines Almost all approaches to bilingual lexicon induction from non-parallel data depend on seed lexica. An exception is decipherment (Dou and Knight, 2012; Dou et al., 2015), and we use it as our baseline. The decipherment approach is not based on distributional semantics, but rather views the source language as a cipher for the target language, and attempts to learn a statistical model to decipher the source language. We run the MonoGiza system as recommended by the toolkit.2 It can also utilize monolingual embeddings (Dou et al., 2015); in this case, we use the same embeddings as the input to our approach. Sharing the underlying spirit with our approach, related methods also build upon monolingual word embeddings and find transformation to link different languages. Although they need seed word translation pairs to train and thus not directly comparable, we report their performance with 50 and 100 seeds for reference. These methods are: 2http://www.isi.edu/naturallanguage/software/monogiza release v1.0.tar.gz # tokens vocab. size Wikipedia comparable corpora zh-en zh 21m 3,349 en 53m 5,154 es-en es 61m 4,774 en 95m 6,637 it-en it 73m 8,490 en 93m 6,597 ja-zh ja 38m 6,043 zh 16m 2,814 tr-en tr 6m 7,482 en 28m 13,220 Large-scale settings zh-en zh 143m 14,686 Wikipedia en 1,907m 61,899 zh-en zh 2,148m 45,958 Gigaword en 5,017m 73,504 Table 1: Statistics of the non-parallel corpora. Language codes: zh = Chinese, en = English, es = Spanish, it = Italian, ja = Japanese, tr = Turkish. • Translation matrix (TM) (Mikolov et al., 2013a): the pioneer of this type of methods mentioned in the introduction, using linear transformation. We use a publicly available implementation.3 • Isometric alignment (IA) (Zhang et al., 2016b): an extension of TM by augmenting its learning objective with the isometric (orthogonal) constraint. Although Zhang et al. (2016b) had subsequent steps for their POS tagging task, it could be used for bilingual lexicon induction as well. We ensure the same input embeddings for these methods and ours. The seed word translation pairs are obtained as follows. First, we ask Google Translate4 to translate the source language vocabulary. Then the target translations are queried again and translated back to the source language, and those that do not match the original source words are discarded. 
This helps to ensure the translation quality. Finally, the translations are discarded if they fall out of our target language vocabulary. 3http://clic.cimec.unitn.it/˜georgiana.dinu/down 4https://translate.google.com 1963 method # seeds accuracy (%) MonoGiza w/o emb. 0 0.05 MonoGiza w/ emb. 0 0.09 TM 50 0.29 100 21.79 IA 50 18.71 100 32.29 Model 1 0 39.25 Model 1 + ortho. 0 28.62 Model 2 0 40.28 Model 3 0 43.31 Table 2: Chinese-English top-1 accuracies of the MonoGiza baseline and our models, along with the translation matrix (TM) and isometric alignment (IA) methods that utilize 50 and 100 seeds. 4.1 Experiments on Chinese-English Data For this set of experiments, the data for training word embeddings comes from Wikipedia comparable corpora.5 Following (Vuli´c and Moens, 2013), we retain only nouns with at least 1,000 occurrences. For the Chinese side, we first use OpenCC6 to normalize characters to be simplified, and then perform Chinese word segmentation and POS tagging with THULAC.7 The preprocessing of the English side involves tokenization, POS tagging, lemmatization, and lowercasing, which we carry out with the NLTK toolkit.8 The statistics of the final training data is given in Table 1, along with the other experimental settings. As the ground truth bilingual lexicon for evaluation, we use Chinese-English Translation Lexicon Version 3.0 (LDC2002L27). Overall Performance Table 2 lists the performance of the MonoGiza baseline and our four variants of adversarial training. MonoGiza obtains low performance, likely due to the harsh evaluation protocol (cf. Section 4.4). Providing it with syntactic information can help (Dou and Knight, 2013), but in a lowresource scenario with zero cross-lingual information, parsers are likely to be inaccurate or even unavailable. 5http://linguatools.org/tools/corpora/wikipediacomparable-corpora 6https://github.com/BYVoid/OpenCC 7http://thulac.thunlp.org 8http://www.nltk.org 城市 小行星 文学 chengshi xiaoxingxing wenxue city asteroid poetry town astronomer literature suburb comet prose area constellation poet proximity orbit writing Table 3: Top-5 English translation candidates proposed by our approach for some Chinese words. The ground truth is marked in bold. 0 500 1000 # seeds 0 10 20 30 Accuracy (%) Ours IA TM Figure 4: Top-1 accuracies of our approach, isometric alignment (IA), and translation matrix (TM), with the number of seeds varying in {50, 100, 200, 500, 1000, 1280}. The unidirectional transformation model attains reasonable accuracy if trained successfully, but it is rather sensitive to hyperparameters and initialization. This training difficulty motivates our orthogonal constraint. But imposing a strict orthogonal constraint hurts performance. It is also about 20 times slower even though we utilize orthogonal parametrization instead of constrained optimization. The last two models represent different relaxations of the orthogonal constraint, and the adversarial autoencoder model achieves the best performance. We therefore use it in our following experiments. Table 3 lists some word translation examples given by the adversarial autoencoder model. Comparison With Seed-Based Methods In this section, we investigate how many seeds TM and IA require to attain the performance level of our approach. There are a total of 1,280 seed translation pairs for Chinese-English, which are removed from the test set during the evaluation for this experiment. We use the most frequent S pairs for TM and IA. 
Figure 4 shows the accuracies with respect to 1964 method # seeds es-en it-en ja-zh tr-en MonoGiza w/o embeddings 0 0.35 0.30 0.04 0.00 MonoGiza w/ embeddings 0 1.19 0.27 0.23 0.09 TM 50 1.24 0.76 0.35 0.09 100 48.61 37.95 26.67 11.15 IA 50 39.89 27.03 19.04 7.58 100 60.44 46.52 36.35 17.11 Ours 0 71.97 58.60 43.02 17.18 Table 4: Top-1 accuracies (%) of the MonoGiza baseline and our approach on Spanish-English, ItalianEnglish, Japanese-Chinese, and Turkish-English. The results for translation matrix (TM) and isometric alignment (IA) using 50 and 100 seeds are also listed. 50 100 150 200 Embedding dimension 30 40 50 Accuracy (%) Figure 5: Top-1 accuracies of our approach with respect to the input embedding dimensions in {20, 50, 100, 200}. S. When the seeds are few, the seed-based methods exhibit clear performance degradation. In this case, we also observe the importance of the orthogonal constraint from the superiority of IA to TM, which supports our introduction of this constraint as we attempt zero supervision. Finally, in line with the finding in (Vuli´c and Korhonen, 2016), hundreds of seeds are needed for TM to generalize. Only then do seed-based methods catch up with our approach, and the performance difference is marginal even when more seeds are provided. Effect of Embedding Dimension As our approach takes monolingual word embeddings as input, it is conceivable that their quality significantly affects how well the two spaces can be connected by a linear map. We look into this aspect by varying the embedding dimension d in Figure 5. As the dimension increases, the accuracy improves and gradually levels off. This indicates that too low a dimension hampers the encoding of linguistic information drawn from the corpus, and it is advisable to use a sufficiently large dimension. 4.2 Experiments on Other Language Pairs Data We also induce bilingual lexica from Wikipedia comparable corpora for the following language pairs: Spanish-English, Italian-English, JapaneseChinese, and Turkish-English. For SpanishEnglish and Italian-English, we choose to use TreeTagger9 for preprocessing, as in (Vuli´c and Moens, 2013). For the Japanese corpus, we use MeCab10 for word segmentation and POS tagging. For Turkish, we utilize the preprocessing tools (tokenization and POS tagging) provided in LORELEI Language Packs (Strassel and Tracey, 2016), and its English side is preprocessed by NLTK. Unlike the other language pairs, the frequency cutoff threshold for Turkish-English is 100, as the amount of data is relatively small. The ground truth bilingual lexica for SpanishEnglish and Italian-English are obtained from Open Multilingual WordNet11 through NLTK. For Japanese-Chinese, we use an in-house lexicon. For Turkish-English, we build a set of ground truth translation pairs in the same way as how we obtain seed word translation pairs from Google Translate, described above. Results As shown in Table 4, the MonoGiza baseline still does not work well on these language pairs, while our approach achieves much better performance. The accuracies are particularly high for SpanishEnglish and Italian-English, likely because they are closely related languages, and their embedding spaces may exhibit stronger isomorphism. 
The 9http://www.cis.uni-muenchen.de/˜schmid/tools/ TreeTagger 10http://taku910.github.io/mecab 11http://compling.hss.ntu.edu.sg/omw 1965 method # seeds Wikipedia Gigaword TM 50 0.00 0.01 100 4.79 2.07 IA 50 3.25 1.68 100 7.08 4.18 Ours 0 7.92 2.53 Table 5: Top-1 accuracies (%) of our approach to inducing bilingual lexica for Chinese-English from Wikipedia and Gigaword. Also listed are results for translation matrix (TM) and isometric alignment (IA) using 50 and 100 seeds. performance on Japanese-Chinese is lower, on a comparable level with Chinese-English (cf. Table 2), and these languages are relatively distantly related. Turkish-English represents a low-resource scenario, and therefore the lexical semantic structure may be insufficiently captured by the embeddings. The agglutinative nature of Turkish can also add to the challenge. 4.3 Large-Scale Settings We experiment with large-scale Chinese-English data from two sources: the whole Wikipedia dump and Gigaword (LDC2011T13 and LDC2011T07). We also simplify preprocessing by removing the noun restriction and the lemmatization step (cf. preprocessing decisions for the above experiments). Although large-scale data may benefit the training of embeddings, it poses a greater challenge to bilingual lexicon induction. First, the degree of non-parallelism tends to increase. Second, with cruder preprocessing, the noise in the corpora may take its toll. Finally, but probably most importantly, the vocabularies expand dramatically compared to previous settings (see Table 1). This means a word translation has to be retrieved from a much larger pool of candidates. For these reasons, we consider the performance of our approach presented in Table 5 to be encouraging. The imbalanced sizes of the Chinese and English Wikipedia do not seem to cause a problem for the structural isomorphism needed by our method. MonoGiza does not scale to such large vocabularies, as it already takes days to train in our Italian-English setting. In contrast, our approach is immune from scalability issues by working with embeddings provided by word2vec, which is well known for its fast speed. With the network method 5k 10k MonoGiza w/o embeddings 13.74 7.80 MonoGiza w/ embeddings 17.98 10.56 (Cao et al., 2016) 23.54 17.82 Ours 68.59 51.86 Table 6: Top-5 accuracies (%) of 5k and 10k most frequent words in the French-English setting. The figures for the baselines are taken from (Cao et al., 2016). configuration used in our experiments, the adversarial autoencoder model takes about two hours to train for 500k minibatches on a single CPU. 4.4 Comparison With (Cao et al., 2016) In order to compare with the recent method by Cao et al. (2016), which also uses zero cross-lingual signal to connect monolingual embeddings, we replicate their French-English experiment to test our approach.12 This experimental setting has important differences from the above ones, mostly in the evaluation protocol. Apart from using top-5 accuracy as the evaluation metric, the ground truth bilingual lexicon is obtained by performing word alignment on a parallel corpus. We find this automatically constructed bilingual lexicon to be noisier than the ones we use for the other language pairs; it often lists tens of translations for a source word. This lenient evaluation protocol should explain MonoGiza’s higher numbers in Table 6 than what we report in the other experiments. In this setting, our approach is able to considerably outperform both MonoGiza and the method by Cao et al. (2016). 
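To make the adversarial setup used throughout these experiments concrete, the following minimal PyTorch sketch shows the basic unidirectional training loop: a linear generator maps pretrained source embeddings into the target space, and a discriminator is trained to tell mapped source vectors from real target vectors, while the generator is trained to fool it. The discriminator width, optimizer, and learning rates below are illustrative assumptions, and the sketch omits the reconstruction term of the adversarial autoencoder variant that performs best above.

import torch
import torch.nn as nn

d = 50                                   # embedding dimension (assumed)
G = nn.Linear(d, d, bias=False)          # generator: the linear map W
D = nn.Sequential(nn.Linear(d, 500), nn.ReLU(), nn.Linear(500, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(src_batch, tgt_batch):
    # src_batch, tgt_batch: (B, d) monolingual embeddings sampled
    # independently from the source and target languages.
    # 1) Discriminator update: real targets -> 1, mapped sources -> 0.
    opt_d.zero_grad()
    real_logits = D(tgt_batch)
    fake_logits = D(G(src_batch).detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()
    # 2) Generator update: make mapped sources look like real targets.
    opt_g.zero_grad()
    fake_logits = D(G(src_batch))
    g_loss = bce(fake_logits, torch.ones_like(fake_logits))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

After training, translations are induced exactly as in the seed-based baselines: each source embedding is mapped by G and its nearest target-language neighbor is retrieved.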
5 Related Work 5.1 Cross-Lingual Word Embeddings for Bilingual Lexicon Induction Inducing bilingual lexica from non-parallel data is a long-standing cross-lingual task. Except for the decipherment approach, traditional statistical methods all require cross-lingual signals (Rapp, 1999; Koehn and Knight, 2002; Fung and Cheung, 2004; Gaussier et al., 2004; Haghighi et al., 2008; Vuli´c et al., 2011; Vuli´c and Moens, 2013). Recent advances in cross-lingual word embeddings (Vuli´c and Korhonen, 2016; Upadhyay et al., 12As a confirmation, we ran MonoGiza in this setting and obtained comparable performance as reported. 1966 2016) have rekindled interest in bilingual lexicon induction. Like their traditional counterparts, these embedding-based methods require crosslingual signals encoded in parallel data, aligned at document level (Vuli´c and Moens, 2015), sentence level (Zou et al., 2013; Chandar A P et al., 2014; Hermann and Blunsom, 2014; Koˇcisk´y et al., 2014; Gouws et al., 2015; Luong et al., 2015; Coulmance et al., 2015; Oshikiri et al., 2016), or word level (i.e. seed lexicon) (Gouws and Søgaard, 2015; Wick et al., 2016; Duong et al., 2016; Shi et al., 2015; Mikolov et al., 2013a; Dinu et al., 2015; Lazaridou et al., 2015; Faruqui and Dyer, 2014; Lu et al., 2015; Ammar et al., 2016; Zhang et al., 2016a, 2017; Smith et al., 2017). In contrast, our work completely removes the need for cross-lingual signals to connect monolingual word embeddings, trained on non-parallel text corpora. As one of our baselines, the method by Cao et al. (2016) also does not require cross-lingual signals to train bilingual word embeddings. It modifies the objective for training embeddings, whereas our approach uses monolingual embeddings trained beforehand and held fixed. More importantly, its learning mechanism is substantially different from ours. It encourages word embeddings from different languages to lie in the shared semantic space by matching the mean and variance of the hidden states, assumed to follow a Gaussian distribution, which is hard to justify. Our approach does not make any assumptions and directly matches the mapped source embedding distribution with the target distribution by adversarial training. A recent work also attempts adversarial training for cross-lingual embedding transformation (Barone, 2016). The model architectures are similar to ours, but the reported results are not positive. We tried the publicly available code on our data, but the results were not positive, either. Therefore, we attribute the outcome to the difference in the loss and training techniques, but not the model architectures or data. 5.2 Adversarial Training Generative adversarial networks are originally proposed for generating realistic images as an implicit generative model, but the adversarial training technique for matching distributions is generalizable to much more tasks, including natural language processing. For example, Ganin et al. (2016) address domain adaptation by adversarially training features to be domain invariant, and test on sentiment classification. Chen et al. (2016) extend this idea to cross-lingual sentiment classification. Our research deals with unsupervised bilingual lexicon induction based on word embeddings, and therefore works with word embedding distributions, which are more interpretable than the neural feature space of classifiers in the above works. 
In the field of neural machine translation, a recent work (He et al., 2016) proposes dual learning, which also involves a two-agent game and therefore bears conceptual resemblance to the adversarial training idea. The framework is carried out with reinforcement learning, and thus differs greatly in implementation from adversarial training. 6 Conclusion In this work, we demonstrate the feasibility of connecting word embeddings of different languages without any cross-lingual signal. This is achieved by matching the distributions of the transformed source language embeddings and target ones via adversarial training. The success of our approach signifies the existence of universal lexical semantic structure across languages. Our work also opens up opportunities for the processing of extremely low-resource languages and domains that lack parallel data completely. Our work is likely to benefit from advances in techniques that further stabilize adversarial training. Future work also includes investigating other divergences that adversarial training can minimize (Nowozin et al., 2016), and broader mathematical tools that match distributions (Mohamed and Lakshminarayanan, 2016). Acknowledgments We thank the anonymous reviewers for their helpful comments. This work is supported by the National Natural Science Foundation of China (No. 61522204), the 973 Program (2014CB340501), and the National Natural Science Foundation of China (No. 61331013). This research is also supported by the Singapore National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme. 1967 References Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively Multilingual Word Embeddings. arXiv:1602.01925 [cs] http://arxiv.org/abs/ 1602.01925. Martin Arjovsky and L´eon Bottou. 2017. Towards Principled Methods For Training Generative Adversarial Networks. In ICLR. http://arxiv.org/abs/ 1701.04862. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In EMNLP. http://aclanthology.info/ papers/learning-principled-bilingual-mappings-ofword-embeddings-while-preserving-monolingualinvariance. Antonio Valerio Miceli Barone. 2016. Towards crosslingual distributed representations without parallel text trained with adversarial autoencoders. In Proceedings of the 1st Workshop on Representation Learning for NLP. https://doi.org/10.18653/v1/ W16-1614. Chris M. Bishop. 1995. Training with Noise is Equivalent to Tikhonov Regularization. Neural Comput. https://doi.org/10.1162/neco.1995.7.1.108. Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. 2016. A Distribution-based Model to Learn Bilingual Word Embeddings. In COLING. http://aclanthology.info/papers/a-distributionbased-model-to-learn-bilingual-word-embeddings. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An Autoencoder Approach to Learning Bilingual Word Representations. In NIPS. http://papers.nips.cc/ paper/5270-an-autoencoder-approach-to-learningbilingual-word-representations.pdf. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification. arXiv:1606.01614 [cs] http:// arxiv.org/abs/1606.01614. Jocelyn Coulmance, Jean-Marc Marty, Guillaume Wenzek, and Amine Benhalloum. 2015. 
Transgram, Fast Cross-lingual Word-embeddings. In EMNLP. http://aclanthology.info/papers/transgram-fast-cross-lingual-word-embeddings. Hal Daum´e III and Jagadeesh Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. In ACL-HLT. http://aclweb.org/ anthology/P11-2071. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving Zero-Shot Learning by Mitigating the Hubness Problem. In ICLR Workshop. http://arxiv.org/abs/1412.6568. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In EMNLP-CoNLL. http://aclweb.org/anthology/D121025. Qing Dou and Kevin Knight. 2013. DependencyBased Decipherment for Resource-Limited Machine Translation. In EMNLP. http://aclanthology.info/ papers/dependency-based-decipherment-forresource-limited-machine-translation. Qing Dou, Ashish Vaswani, Kevin Knight, and Chris Dyer. 2015. Unifying Bayesian Inference and Vector Space Models for Improved Decipherment. In ACL-IJCNLP. http://www.aclweb.org/anthology/ P15-1081. Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2016. Learning Crosslingual Word Embeddings without Bilingual Corpora. In EMNLP. http://aclanthology.info/ papers/learning-crosslingual-word-embeddingswithout-bilingual-corpora. Manaal Faruqui and Chris Dyer. 2014. Improving Vector Space Word Representations Using Multilingual Correlation. In EACL. http:/ /aclanthology.info/papers/improving-vectorspace-word-representations-using-multilingualcorrelation. Pascale Fung and Percy Cheung. 2004. Mining VeryNon-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM. In EMNLP. http://aclweb.org/anthology/W04-3208. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research http:/ /jmlr.org/papers/v17/15-239.html. Eric Gaussier, J.M. Renders, I. Matveeva, C. Goutte, and H. Dejean. 2004. A Geometric View on Bilingual Lexicon Extraction from Comparable Corpora. In ACL. https://doi.org/10.3115/1218955.1219022. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In NIPS. http://papers.nips.cc/ paper/5423-generative-adversarial-nets.pdf. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast Bilingual Distributed Representations without Word Alignments. In ICML. http://jmlr.org/proceedings/papers/v37/ gouws15.html. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In NAACL-HLT. http://www.aclweb.org/anthology/ N15-1157. 1968 Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning Bilingual Lexicons from Monolingual Corpora. In ACL-HLT. http://aclanthology.info/papers/learning-bilinguallexicons-from-monolingual-corpora. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual Learning for Machine Translation. In NIPS. http://papers.nips.cc/paper/6469-dual-learning-formachine-translation.pdf. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual Distributed Representations without Word Alignment. In ICLR. http://arxiv.org/abs/ 1312.6173. Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation. 
http://aclweb.org/anthology/W13-2233. Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs] http://arxiv.org/abs/ 1412.6980. Philipp Koehn and Kevin Knight. 2002. Learning a Translation Lexicon from Monolingual Corpora. In ACL Workshop on Unsupervised Lexical Acquisition. https://doi.org/10.3115/1118627.1118629. Tom´aˇs Koˇcisk´y, Karl Moritz Hermann, and Phil Blunsom. 2014. Learning Bilingual Word Representations by Marginalizing Alignments. In ACL. http://aclanthology.info/papers/learning-bilingualword-representations-by-marginalizing-alignments. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning. In ACL-IJCNLP. https://doi.org/10.3115/v1/P151027. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep Multilingual Correlation for Improved Word Embeddings. In NAACL-HLT. http://aclanthology.info/papers/ deep-multilingual-correlation-for-improved-wordembeddings. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual Word Representations with Monolingual Quality in Mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing. http://aclanthology.info/papers/bilingual-wordrepresentations-with-monolingual-quality-in-mind. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2015. Adversarial Autoencoders. arXiv:1511.05644 [cs] http:// arxiv.org/abs/1511.05644. Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. 2016. Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections. arXiv:1612.00188 [cs] http://arxiv.org/abs/1612.00188. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting Similarities among Languages for Machine Translation. arXiv:1309.4168 [cs] http:// arxiv.org/abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed Representations of Words and Phrases and their Compositionality. In NIPS. http://papers.nips.cc/ paper/5021-distributed-representations-of-wordsand-phrases-and-their-compositionality.pdf. Shakir Mohamed and Balaji Lakshminarayanan. 2016. Learning in Implicit Generative Models. arXiv:1610.03483 [cs, stat] http://arxiv.org/abs/ 1610.03483. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. 2016. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv:1606.00709 [cs, stat] http://arxiv.org/ abs/1606.00709. Takamasa Oshikiri, Kazuki Fukui, and Hidetoshi Shimodaira. 2016. Cross-Lingual Word Representations via Spectral Graph Embeddings. In ACL. https://doi.org/10.18653/v1/P16-2080. Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434 [cs] http://arxiv.org/abs/ 1511.06434. Reinhard Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora. In ACL. https://doi.org/10.3115/ 1034678.1034756. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved Techniques for Training GANs. In NIPS. http://papers.nips.cc/paper/6125-improvedtechniques-for-training-gans.pdf. Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning Cross-lingual Word Embeddings via Matrix Co-factorization. In ACL-IJCNLP. http:/ /aclanthology.info/papers/learning-cross-lingualword-embeddings-via-matrix-co-factorization. 
Samuel Smith, David Turban, Steven Hamblin, and Nils Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In ICLR. http://arxiv.org/abs/1702.03859. Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Husz´ar. 2016. Amortised MAP Inference for Image Super-resolution. arXiv:1610.04490 [cs, stat] http://arxiv.org/abs/ 1610.04490. 1969 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research http://www.jmlr.org/papers/v15/ srivastava14a.html. Stephanie Strassel and Jennifer Tracey. 2016. LORELEI Language Packs: Data, Tools, and Resources for Technology Development in Low Resource Languages. In LREC. http://www.lrecconf.org/proceedings/lrec2016/pdf/1138 Paper.pdf. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual Models of Word Embeddings: An Empirical Comparison. In ACL. http:/ /aclanthology.info/papers/cross-lingual-models-ofword-embeddings-an-empirical-comparison. Laurens Van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Weinberger. 2013. Learning with Marginalized Corrupted Features. In ICML. http://www.jmlr.org/proceedings/papers/ v28/vandermaaten13.html. Ivan Vuli´c and Anna Korhonen. 2016. On the Role of Seed Lexicons in Learning Bilingual Word Embeddings. In ACL. http://aclanthology.info/ papers/on-the-role-of-seed-lexicons-in-learningbilingual-word-embeddings. Ivan Vuli´c and Marie-Francine Moens. 2013. CrossLingual Semantic Similarity of Words as the Similarity of Their Semantic Word Responses. In NAACL-HLT. http://aclanthology.info/papers/ cross-lingual-semantic-similarity-of-words-as-thesimilarity-of-their-semantic-word-responses. Ivan Vuli´c and Marie-Francine Moens. 2015. Bilingual Word Embeddings from Non-Parallel Document-Aligned Data Applied to Bilingual Lexicon Induction. In ACL-IJCNLP. http://aclanthology.info/papers/bilingual-wordembeddings-from-non-parallel-document-aligneddata-applied-to-bilingual-lexicon-induction. Ivan Vuli´c, Wim De Smet, and Marie-Francine Moens. 2011. Identifying Word Translations from Comparable Corpora Using Latent Topic Models. In ACL-HLT. http://aclanthology.info/papers/ identifying-word-translations-from-comparablecorpora-using-latent-topic-models. Stefan Wager, Sida Wang, and Percy S Liang. 2013. Dropout Training as Adaptive Regularization. In NIPS. http://papers.nips.cc/paper/4882-dropouttraining-as-adaptive-regularization.pdf. Michael Wick, Pallika Kanani, and Adam Pocock. 2016. Minimally-Constrained Multilingual Embeddings via Artificial Code-Switching. In AAAI. http://www.aaai.org/Conferences/AAAI/ 2016/Papers/15Wick12464.pdf. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation. In NAACL-HLT. http://aclanthology.info/papers/ normalized-word-embedding-and-orthogonaltransform-for-bilingual-word-translation. Hyejin Youn, Logan Sutton, Eric Smith, Cristopher Moore, Jon F. Wilkins, Ian Maddieson, William Croft, and Tanmoy Bhattacharya. 2016. On the universal structure of human lexical semantics. Proceedings of the National Academy of Sciences https://doi.org/10.1073/pnas.1520752113. Meng Zhang, Yang Liu, Huanbo Luan, Yiqun Liu, and Maosong Sun. 2016a. Inducing Bilingual Lexica From Non-Parallel Data With Earth Mover’s Distance Regularization. In COLING. 
http://aclanthology.info/papers/inducing-bilinguallexica-from-non-parallel-data-with-earth-mover-sdistance-regularization. Meng Zhang, Haoruo Peng, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Bilingual Lexicon Induction From Non-Parallel Data With Minimal Supervision. In AAAI. http://thunlp.org/˜zm/publications/ aaai2017.pdf. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016b. Ten Pairs to Tag – Multilingual POS Tagging via Coarse Mapping between Embeddings. In NAACL-HLT. http://aclanthology.info/papers/ten-pairs-to-tagmultilingual-pos-tagging-via-coarse-mappingbetween-embeddings. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual Word Embeddings for Phrase-Based Machine Translation. In EMNLP. http://aclanthology.info/papers/ bilingual-word-embeddings-for-phrase-basedmachine-translation. 1970
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 189–198 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1018 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 189–198 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1018 Gated Self-Matching Networks for Reading Comprehension and Question Answering Wenhui Wang†♮§∗Nan Yang‡§ Furu Wei‡ Baobao Chang†♮ Ming Zhou‡ †Key Laboratory of Computational Linguistics, Peking University, MOE, China ‡Microsoft Research, Beijing, China ♮Collaborative Innovation Center for Language Ability, Xuzhou, 221009, China {wangwenhui,chbb}@pku.edu.cn {nanya,fuwei,mingzhou}@microsoft.com Abstract In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model. 1 Introduction In this paper, we focus on reading comprehension style question answering which aims to answer questions given a passage or document. We specifically focus on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), a largescale dataset for reading comprehension and question answering which is manually created through crowdsourcing. SQuAD constrains answers to the space of all possible spans within the reference passage, which is different from cloze-style reading comprehension datasets (Hermann et al., ∗Contribution during internship at Microsoft Research. §Equal contribution. 2015; Hill et al., 2016) in which answers are single words or entities. Moreover, SQuAD requires different forms of logical reasoning to infer the answer (Rajpurkar et al., 2016). Rapid progress has been made since the release of the SQuAD dataset. Wang and Jiang (2016b) build question-aware passage representation with match-LSTM (Wang and Jiang, 2016a), and predict answer boundaries in the passage with pointer networks (Vinyals et al., 2015). Seo et al. (2016) introduce bi-directional attention flow networks to model question-passage pairs at multiple levels of granularity. Xiong et al. (2016) propose dynamic co-attention networks which attend the question and passage simultaneously and iteratively refine answer predictions. Lee et al. (2016) and Yu et al. (2016) predict answers by ranking continuous text spans within passages. Inspired by Wang and Jiang (2016b), we introduce a gated self-matching network, illustrated in Figure 1, an end-to-end neural network model for reading comprehension and question answering. 
Our model consists of four parts: 1) the recurrent network encoder to build representation for questions and passages separately, 2) the gated matching layer to match the question and passage, 3) the self-matching layer to aggregate information from the whole passage, and 4) the pointernetwork based answer boundary prediction layer. The key contributions of this work are three-fold. First, we propose a gated attention-based recurrent network, which adds an additional gate to the attention-based recurrent networks (Bahdanau et al., 2014; Rockt¨aschel et al., 2015; Wang and Jiang, 2016a), to account for the fact that words in the passage are of different importance to answer a particular question for reading comprehension and question answering. In Wang and Jiang (2016a), words in a passage with their corresponding attention-weighted question context are en189 coded together to produce question-aware passage representation. By introducing a gating mechanism, our gated attention-based recurrent network assigns different levels of importance to passage parts depending on their relevance to the question, masking out irrelevant passage parts and emphasizing the important ones. Second, we introduce a self-matching mechanism, which can effectively aggregate evidence from the whole passage to infer the answer. Through a gated matching layer, the resulting question-aware passage representation effectively encodes question information for each passage word. However, recurrent networks can only memorize limited passage context in practice despite its theoretical capability. One answer candidate is often unaware of the clues in other parts of the passage. To address this problem, we propose a self-matching layer to dynamically refine passage representation with information from the whole passage. Based on question-aware passage representation, we employ gated attention-based recurrent networks on passage against passage itself, aggregating evidence relevant to the current passage word from every word in the passage. A gated attention-based recurrent network layer and self-matching layer dynamically enrich each passage representation with information aggregated from both question and passage, enabling subsequent network to better predict answers. Lastly, the proposed method yields state-of-theart results against strong baselines. Our single model achieves 71.3% exact match accuracy on the hidden SQuAD test set, while the ensemble model further boosts the result to 75.9%. At the time1 of submission of this paper, our model holds the first place on the SQuAD leader board. 2 Task Description For reading comprehension style question answering, a passage P and question Q are given, our task is to predict an answer A to question Q based on information found in P. The SQuAD dataset further constrains answer A to be a continuous subspan of passage P. Answer A often includes nonentities and can be much longer phrases. This setup challenges us to understand and reason about both the question and passage in order to infer the answer. Table 1 shows a simple example from the SQuAD dataset. 1On Feb. 6, 2017 Passage: Tesla later approached Morgan to ask for more funds to build a more powerful transmitter. When asked where all the money had gone, Tesla responded by saying that he was affected by the Panic of 1901, which he (Morgan) had caused. Morgan was shocked by the reminder of his part in the stock market crash and by Tesla’s breach of contract by asking for more funds. 
Tesla wrote another plea to Morgan, but it was also fruitless. Morgan still owed Tesla money on the original agreement, and Tesla had been facing foreclosure even before construction of the tower began. Question: On what did Tesla blame for the loss of the initial money? Answer: Panic of 1901 Table 1: An example from the SQuAD dataset. 3 Gated Self-Matching Networks Figure 1 gives an overview of the gated selfmatching networks. First, the question and passage are processed by a bi-directional recurrent network (Mikolov et al., 2010) separately. We then match the question and passage with gated attention-based recurrent networks, obtaining question-aware representation for the passage. On top of that, we apply self-matching attention to aggregate evidence from the whole passage and refine the passage representation, which is then fed into the output layer to predict the boundary of the answer span. 3.1 Question and Passage Encoder Consider a question Q = {wQ t }m t=1 and a passage P = {wP t }n t=1. We first convert the words to their respective word-level embeddings ({eQ t }m t=1 and {eP t }n t=1) and character-level embeddings ({cQ t }m t=1 and {cP t }n t=1). The character-level embeddings are generated by taking the final hidden states of a bi-directional recurrent neural network (RNN) applied to embeddings of characters in the token. Such character-level embeddings have been shown to be helpful to deal with out-ofvocab (OOV) tokens. We then use a bi-directional RNN to produce new representation uQ 1 , . . . , uQ m and uP 1 , . . . , uP n of all words in the question and passage respectively: uQ t = BiRNNQ(uQ t−1, [eQ t , cQ t ]) (1) uP t = BiRNNP (uP t−1, [eP t , cP t ]) (2) We choose to use Gated Recurrent Unit (GRU) (Cho et al., 2014) in our experiment since it performs similarly to LSTM (Hochreiter and Schmidhuber, 1997) but is computationally cheaper. 190 𝑢1 𝑄 𝑢2 𝑄 𝑢𝑚 𝑄 Question Attention Question Vector 𝑣1𝑃 𝑣2𝑃 𝑣3𝑃 𝑢1𝑃 𝑢2𝑃 𝑢3𝑃 Passage 𝑣1𝑃 𝑣2𝑃 𝑣3𝑃 𝑣𝑛𝑃 ℎ1𝑃 ℎ2𝑃 ℎ3𝑃 Attention ℎ1𝑎 ℎ2𝑎 Question and Passage GRU Layer Question and Passage Matching Layer Passage Self-Matching Layer Output Layer Start End 𝑢𝑛𝑃 … … 𝑣𝑛𝑃 … ℎ𝑛𝑃 … When was tested The delay in … test … … 𝑟𝑄 Figure 1: Gated Self-Matching Networks structure overview. 3.2 Gated Attention-based Recurrent Networks We propose a gated attention-based recurrent network to incorporate question information into passage representation. It is a variant of attentionbased recurrent networks, with an additional gate to determine the importance of information in the passage regarding a question. Given question and passage representation {uQ t }m t=1 and {uP t }n t=1, Rockt¨aschel et al. (2015) propose generating sentence-pair representation {vP t }n t=1 via soft-alignment of words in the question and passage as follows: vP t = RNN(vP t−1, ct) (3) where ct = att(uQ, [uP t , vP t−1]) is an attentionpooling vector of the whole question (uQ): st j = vTtanh(W Q u uQ j + W P u uP t + W P v vP t−1) at i = exp(st i)/Σm j=1exp(st j) ct = Σm i=1at iuQ i (4) Each passage representation vP t dynamically incorporates aggregated matching information from the whole question. 
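A small sketch may help make the attention-pooling step of Eq. (3)-(4) concrete: for the current passage word, scores over all question words are computed from the question representations, the current passage word, and the previous matching state, and the softmax-normalized weights yield the context vector c_t. The shapes and the shared hidden size below are assumptions of this illustration; the recurrent update and the additional gate introduced next are omitted.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def question_attention_pooling(u_Q, u_P_t, v_P_prev, W_uQ, W_uP, W_v, v):
    # u_Q: (m, h) question word representations u^Q_1..u^Q_m
    # u_P_t: (h,) current passage word representation u^P_t
    # v_P_prev: (h,) previous question-aware passage state v^P_{t-1}
    # W_uQ, W_uP, W_v: (h, h) projection matrices; v: (h,) scoring vector
    scores = np.tanh(u_Q @ W_uQ.T + u_P_t @ W_uP.T + v_P_prev @ W_v.T) @ v  # s^t_j
    weights = softmax(scores)   # a^t_i
    return weights @ u_Q        # c_t: attention-pooled question vector, shape (h,)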
Wang and Jiang (2016a) introduce matchLSTM, which takes uP t as an additional input into the recurrent network: vP t = RNN(vP t−1, [uP t , ct]) (5) To determine the importance of passage parts and attend to the ones relevant to the question, we add another gate to the input ([uP t , ct]) of RNN: gt = sigmoid(Wg[uP t , ct]) [uP t , ct]∗= gt ⊙[uP t , ct] (6) Different from the gates in LSTM or GRU, the additional gate is based on the current passage word and its attention-pooling vector of the question, which focuses on the relation between the question and current passage word. The gate effectively model the phenomenon that only parts of the passage are relevant to the question in reading comprehension and question answering. [uP t , ct]∗ is utilized in subsequent calculations instead of [uP t , ct]. We call this gated attention-based recurrent networks. It can be applied to variants of RNN, such as GRU and LSTM. We also conduct experiments to show the effectiveness of the additional gate on both GRU and LSTM. 3.3 Self-Matching Attention Through gated attention-based recurrent networks, question-aware passage representation {vP t }n t=1 is generated to pinpoint important parts in the passage. One problem with such representation is that it has very limited knowledge of context. One answer candidate is often oblivious to important 191 cues in the passage outside its surrounding window. Moreover, there exists some sort of lexical or syntactic divergence between the question and passage in the majority of SQuAD dataset (Rajpurkar et al., 2016). Passage context is necessary to infer the answer. To address this problem, we propose directly matching the question-aware passage representation against itself. It dynamically collects evidence from the whole passage for words in passage and encodes the evidence relevant to the current passage word and its matching question information into the passage representation hP t : hP t = BiRNN(hP t−1, [vP t , ct]) (7) where ct = att(vP , vP t ) is an attention-pooling vector of the whole passage (vP ): st j = vTtanh(W P v vP j + W ˜P v vP t ) at i = exp(st i)/Σn j=1exp(st j) ct = Σn i=1at ivP i (8) An additional gate as in gated attention-based recurrent networks is applied to [vP t , ct] to adaptively control the input of RNN. Self-matching extracts evidence from the whole passage according to the current passage word and question information. 3.4 Output Layer We follow Wang and Jiang (2016b) and use pointer networks (Vinyals et al., 2015) to predict the start and end position of the answer. In addition, we use an attention-pooling over the question representation to generate the initial hidden vector for the pointer network. Given the passage representation {hP t }n t=1, the attention mechanism is utilized as a pointer to select the start position (p1) and end position (p2) from the passage, which can be formulated as follows: st j = vTtanh(W P h hP j + W a h ha t−1) at i = exp(st i)/Σn j=1exp(st j) pt = arg max(at 1, . . . , at n) (9) Here ha t−1 represents the last hidden state of the answer recurrent network (pointer network). The input of the answer recurrent network is the attention-pooling vector based on current predicted probability at: ct = Σn i=1at ihP i ha t = RNN(ha t−1, ct) (10) When predicting the start position, ha t−1 represents the initial hidden state of the answer recurrent network. We utilize the question vector rQ as the initial state of the answer recurrent network. 
rQ = att(uQ, V Q r ) is an attention-pooling vector of the question based on the parameter V Q r : sj = vTtanh(W Q u uQ j + W Q v V Q r ) ai = exp(si)/Σm j=1exp(sj) rQ = Σm i=1aiuQ i (11) To train the network, we minimize the sum of the negative log probabilities of the ground truth start and end position by the predicted distributions. 4 Experiment 4.1 Implementation Details We specially focus on the SQuAD dataset to train and evaluate our model, which has garnered a huge attention over the past few months. SQuAD is composed of 100,000+ questions posed by crowd workers on 536 Wikipedia articles. The dataset is randomly partitioned into a training set (80%), a development set (10%), and a test set (10%). The answer to every question is a segment of the corresponding passage. We use the tokenizer from Stanford CoreNLP (Manning et al., 2014) to preprocess each passage and question. The Gated Recurrent Unit (Cho et al., 2014) variant of LSTM is used throughout our model. For word embedding, we use pretrained case-sensitive GloVe embeddings2 (Pennington et al., 2014) for both questions and passages, and it is fixed during training; We use zero vectors to represent all out-of-vocab words. We utilize 1 layer of bi-directional GRU to compute character-level embeddings and 3 layers of bi-directional GRU to encode questions and passages, the gated attention-based recurrent network for question and passage matching is also encoded bidirectionally in our experiment. The hidden vector length is set to 75 for all layers. The hidden size used to compute attention scores is also 75. We also apply dropout (Srivastava et al., 2014) between layers with a dropout rate of 0.2. The model is optimized with AdaDelta (Zeiler, 2012) with an initial learning rate of 1. The ρ and ϵ used in AdaDelta are 0.95 and 1e−6 respectively. 2Downloaded from http://nlp.stanford.edu/ data/glove.840B.300d.zip. 192 Dev Set Test Set Single model EM / F1 EM / F1 LR Baseline (Rajpurkar et al., 2016) 40.0 / 51.0 40.4 / 51.0 Dynamic Chunk Reader (Yu et al., 2016) 62.5 / 71.2 62.5 / 71.0 Match-LSTM with Ans-Ptr (Wang and Jiang, 2016b) 64.1 / 73.9 64.7 / 73.7 Dynamic Coattention Networks (Xiong et al., 2016) 65.4 / 75.6 66.2 / 75.9 RaSoR (Lee et al., 2016) 66.4 / 74.9 - / BiDAF (Seo et al., 2016) 68.0 / 77.3 68.0 / 77.3 jNet (Zhang et al., 2017) - / 68.7 / 77.4 Multi-Perspective Matching (Wang et al., 2016) - / 68.9 / 77.8 FastQA (Weissenborn et al., 2017) - / 68.4 / 77.1 FastQAExt (Weissenborn et al., 2017) - / 70.8 / 78.9 R-NET 71.1 / 79.5 71.3 / 79.7 Ensemble model Fine-Grained Gating (Yang et al., 2016) 62.4 / 73.4 62.5 / 73.3 Match-LSTM with Ans-Ptr (Wang and Jiang, 2016b) 67.6 / 76.8 67.9 / 77.0 RaSoR (Lee et al., 2016) 68.2 / 76.7 - / Dynamic Coattention Networks (Xiong et al., 2016) 70.3 / 79.4 71.6 / 80.4 BiDAF (Seo et al., 2016) 73.3 / 81.1 73.3 / 81.1 Multi-Perspective Matching (Wang et al., 2016) - / 73.8 / 81.3 R-NET 75.6 / 82.8 75.9 / 82.9 Human Performance (Rajpurkar et al., 2016) 80.3 / 90.5 77.0 / 86.8 Table 2: The performance of our gated self-matching networks (R-NET) and competing approaches4. Single Model EM / F1 Gated Self-Matching (GRU) 71.1 / 79.5 -Character embedding 69.6 / 78.6 -Gating 67.9 / 77.1 -Self-Matching 67.6 / 76.7 -Gating, -Self-Matching 65.4 / 74.7 Table 3: Ablation tests of single model on the SQuAD dev set. All the components significantly (t-test, p < 0.05) improve the model. 4.2 Main Results Two metrics are utilized to evaluate model performance: Exact Match (EM) and F1 score. 
EM measures the percentage of the prediction that matches one of the ground truth answers exactly. F1 measures the overlap between the prediction and ground truth answers which takes the maximum F1 over all of the ground truth answers. The scores on dev set are evaluated by the official script3. Since the test set is hidden, we are required to submit the model to Stanford NLP group to obtain the test scores. Table 2 shows exact match and F1 scores on the 3Downloaded from http://stanford-qa.com Single Model EM / F1 Base model (GRU) 64.5 / 74.1 +Gating 66.2 / 75.8 Base model (LSTM) 64.2 / 73.9 +Gating 66.0 / 75.6 Table 4: Effectiveness of gated attention-based recurrent networks for both GRU and LSTM. dev and test set of our model and competing approaches4. The ensemble model consists of 20 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 20 runs for each question. As we can see, our method clearly outperforms the baseline and several strong state-of-the-art systems for both single model and ensembles. 4.3 Ablation Study We do ablation tests on the dev set to analyze the contribution of components of gated self-matching networks. As illustrated in Table 3, the gated 4Extracted from SQuAD leaderboard http: //stanford-qa.com on Feb. 6, 2017. 193 Figure 2: Part of the attention matrices for self-matching. Each row is the attention weights of the whole passage for the current passage word. The darker the color is the higher the weight is. Some key evidence relevant to the question-passage tuple is more encoded into answer candidates. attention-based recurrent network (GARNN) and self-matching attention mechanism positively contribute to the final results of gated self-matching networks. Removing self-matching results in 3.5 point EM drop, which reveals that information in the passage plays an important role. Characterlevel embeddings contribute towards the model’s performance since it can better handle out-ofvocab or rare words. To show the effectiveness of GARNN for variant RNNs, we conduct experiments on the base model (Wang and Jiang, 2016b) of different variant RNNs. The base model match the question and passage via a variant of attentionbased recurrent network (Wang and Jiang, 2016a), and employ pointer networks to predict the answer. Character-level embeddings are not utilized. As shown in Table 4, the gate introduced in question and passage matching layer is helpful for both GRU and LSTM on the SQuAD dataset. 5 Discussion 5.1 Encoding Evidence from Passage To show the ability of the model for encoding evidence from the passage, we draw the alignment of the passage against itself in self-matching. The attention weights are shown in Figure 2, in which the darker the color is the higher the weight is. We can see that key evidence aggregated from the whole passage is more encoded into the answer candidates. For example, the answer “Egg of Columbus” pays more attention to the key information “Tesla”, “device” and the lexical variation word “known” that are relevant to the question-passage tuple. The answer “world classic of epoch-making oratory” mainly focuses on the evidence “Michael Mullet”, “speech” and lexical variation word “considers”. For other words, the attention weights are more evenly distributed between evidence and some irrelevant parts. Selfmatching do adaptively aggregate evidence for words in passage. 
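For reference, the two metrics reported above can be computed roughly as in the simplified Python sketch below; the official SQuAD evaluation script additionally lowercases and strips punctuation and articles before comparison, which is omitted here.

from collections import Counter

def exact_match(prediction, ground_truths):
    # EM: 1 if the predicted span string equals any reference exactly.
    return float(any(prediction == gt for gt in ground_truths))

def f1(prediction, ground_truths):
    # Token-overlap F1, taking the maximum over all reference answers.
    best = 0.0
    pred_tokens = prediction.split()
    for gt in ground_truths:
        gt_tokens = gt.split()
        common = Counter(pred_tokens) & Counter(gt_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            continue
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gt_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best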
5.2 Result Analysis To further analyse the model’s performance, we analyse the F1 score for different question types (Figure 3(a)), different answer lengths (Figure 3(b)), different passage lengths (Figure 3(c)) and different question lengths (Figure 3(d)) of our 194 (a) (b) (c) (d) Figure 3: Model performance on different question types (a), different answer lengths (b), different passage lengths (c), different question lengths (d). The point on the x-axis of figure (c) and (d) represent the datas whose passages length or questions length are between the value of current point and last point. model and its ablation models. As we can see, both four models show the same trend. The questions are split into different groups based on a set of question words we have defined, including “what”, “how”, “who”, “when”, “which”, “where”, and “why”. As we can see, our model is better at “when” and “who” questions, but poorly on “why” questions. This is mainly because the answers to why questions can be very diverse, and they are not restricted to any certain type of phrases. From the Graph 3(b), the performance of our model obviously drops with the increase of answer length. Longer answers are harder to predict. From Graph 3(c) and 3(d), we discover that the performance remains stable with the increase in length, the obvious fluctuation in longer passages and questions is mainly because the proportion is too small. Our model is largely agnostic to long passages and focuses on important part of the passage. 6 Related Work Reading Comprehension and Question Answering Dataset Benchmark datasets play an important role in recent progress in reading comprehension and question answering research. Existing datasets can be classified into two categories according to whether they are manually labeled. Those that are labeled by humans are always in high quality (Richardson et al., 2013; Berant et al., 2014; Yang et al., 2015), but are too small for training modern data-intensive models. Those that are automatically generated from natural occurring data can be very large (Hill et al., 2016; Hermann et al., 2015), which allow the training of more expressive models. However, they are in cloze style, in which the goal is to predict the missing word (often a named entity) in a passage. Moreover, Chen et al. (2016) have shown that the CNN / Daily News dataset (Hermann et al., 2015) requires less reasoning than previously thought, and conclude that performance is almost saturated. Different from above datasets, the SQuAD provides a large and high-quality dataset. The answers in SQuAD often include non-entities and can be much longer phrase, which is more challenging than cloze-style datasets. Moreover, Rajpurkar et al. (2016) show that the dataset retains a diverse set of answers and requires different forms of logical reasoning, including multi-sentence reasoning. MS MARCO (Nguyen et al., 2016) is also a large-scale dataset. The questions in the dataset 195 are real anonymized queries issued through Bing or Cortana and the passages are related web pages. For each question in the dataset, several related passages are provided. However, the answers are human generated, which is different from SQuAD where answers must be a span of the passage. 
End-to-end Neural Networks for Reading Comprehension Along with cloze-style datasets, several powerful deep learning models (Hermann et al., 2015; Hill et al., 2016; Chen et al., 2016; Kadlec et al., 2016; Sordoni et al., 2016; Cui et al., 2016; Trischler et al., 2016; Dhingra et al., 2016; Shen et al., 2016) have been introduced to solve this problem. Hermann et al. (2015) first introduce attention mechanism into reading comprehension. Hill et al. (2016) propose a windowbased memory network for CBT dataset. Kadlec et al. (2016) introduce pointer networks with one attention step to predict the blanking out entities. Sordoni et al. (2016) propose an iterative alternating attention mechanism to better model the links between question and passage. Trischler et al. (2016) solve cloze-style question answering task by combining an attentive model with a reranking model. Dhingra et al. (2016) propose iteratively selecting important parts of the passage by a multiplying gating function with the question representation. Cui et al. (2016) propose a two-way attention mechanism to encode the passage and question mutually. Shen et al. (2016) propose iteratively inferring the answer with a dynamic number of reasoning steps and is trained with reinforcement learning. Neural network-based models demonstrate the effectiveness on the SQuAD dataset. Wang and Jiang (2016b) combine match-LSTM and pointer networks to produce the boundary of the answer. Xiong et al. (2016) and Seo et al. (2016) employ variant coattention mechanism to match the question and passage mutually. Xiong et al. (2016) propose a dynamic pointer network to iteratively infer the answer. Yu et al. (2016) and Lee et al. (2016) solve SQuAD by ranking continuous text spans within passage. Yang et al. (2016) present a fine-grained gating mechanism to dynamically combine word-level and character-level representation and model the interaction between questions and passages. Wang et al. (2016) propose matching the context of passage with the question from multiple perspectives. Different from the above models, we introduce self-matching attention in our model. It dynamically refines the passage representation by looking over the whole passage and aggregating evidence relevant to the current passage word and question, allowing our model make full use of passage information. Weightedly attending to word context has been proposed in several works. Ling et al. (2015) propose considering window-based contextual words differently depending on the word and its relative position. Cheng et al. (2016) propose a novel LSTM network to encode words in a sentence which considers the relation between the current token being processed and its past tokens in the memory. Parikh et al. (2016) apply this method to encode words in a sentence according to word form and its distance. Since passage information relevant to question is more helpful to infer the answer in reading comprehension, we apply self-matching based on question-aware representation and gated attention-based recurrent networks. It helps our model mainly focus on question-relevant evidence in the passage and dynamically look over the whole passage to aggregate evidence. Another key component of our model is the attention-based recurrent network, which has demonstrated success in a wide range of tasks. Bahdanau et al. (2014) first propose attentionbased recurrent networks to infer word-level alignment when generating the target word. Hermann et al. 
(2015) introduce word-level attention into reading comprehension to model the interaction between questions and passages. Rockt¨aschel et al. (2015) and Wang and Jiang (2016a) propose determining entailment via word-by-word matching. The gated attention-based recurrent network is a variant of attention-based recurrent network with an additional gate to model the fact that passage parts are of different importance to the particular question for reading comprehension and question answering. 7 Conclusion In this paper, we present gated self-matching networks for reading comprehension and question answering. We introduce the gated attentionbased recurrent networks and self-matching attention mechanism to obtain representation for the question and passage, and then use the pointernetworks to locate answer boundaries. Our model achieves state-of-the-art results on the SQuAD 196 dataset, outperforming several strong competing systems. As for future work, we are applying the gated self-matching networks to other reading comprehension and question answering datasets, such as the MS MARCO dataset (Nguyen et al., 2016). Acknowledgement We thank all the anonymous reviewers for their helpful comments. We thank Pranav Rajpurkar for testing our model on the hidden test dataset. This work is partially supported by National Key Basic Research Program of China under Grant No.2014CB340504 and National Natural Science Foundation of China under Grant No.61273318. The corresponding author of this paper is Baobao Chang. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR . Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1724– 1734. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-overattention neural networks for reading comprehension. CoRR . Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. CoRR . Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015. pages 1693–1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of the International Conference on Learning Representations. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Wang Ling, Yulia Tsvetkov, Silvio Amir, Ramon Fermandez, Chris Dyer, Alan W. Black, Isabel Trancoso, and Chu-Cheng Lin. 2015. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL (System Demonstrations). pages 55–60. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268. 197 Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532–1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 193–203. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. Reasonet: Learning to stop reading in machine comprehension. 
In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016.. Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. CoRR abs/1606.02245. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research . Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. 2016. Natural language comprehension with the epireader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2692–2700. Shuohang Wang and Jing Jiang. 2016a. Learning natural language inference with LSTM. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. Shuohang Wang and Jing Jiang. 2016b. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 . Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neural architecture for question answering. arXiv preprint arXiv:1703.04816 . Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of EMNLP. Citeseer, pages 2013–2018. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Words or characters? fine-grained gating for reading comprehension. CoRR abs/1611.01724. Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. 2016. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996 . Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, and Hui Jiang. 2017. Exploring question understanding and adaptation in neuralnetwork-based question answering. arXiv preprint arXiv:1703.04617 . 198
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1971–1982 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1180 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1971–1982 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1180 Estimating Code-Switching on Twitter with a Novel Generalized Word-Level Language Detection Technique Shruti Rijhwani∗ Language Technologies Institute Carnegie Mellon University [email protected] Royal Sequiera∗ University of Waterloo Waterloo, Canada [email protected] Monojit Choudhury Kalika Bali Chandra Sekhar Maddila Microsoft Research Bangalore, India {monojitc,kalikab,chmaddil}@microsoft.com Abstract Word-level language detection is necessary for analyzing code-switched text, where multiple languages could be mixed within a sentence. Existing models are restricted to code-switching between two specific languages and fail in real-world scenarios as text input rarely has a priori information on the languages used. We present a novel unsupervised word-level language detection technique for codeswitched text for an arbitrarily large number of languages, which does not require any manually annotated training data. Our experiments with tweets in seven languages show a 74% relative error reduction in word-level labeling with respect to competitive baselines. We then use this system to conduct a large-scale quantitative analysis of code-switching patterns on Twitter, both global as well as regionspecific, with 58M tweets. 1 Introduction In stable multilingual societies, communication often features fluid alteration between two or more languages – a phenomenon known as code-switching1 (Gumperz, 1982; Myers-Scotton, 1993). It has been studied extensively in linguistics, primarily as a speech phenomenon (Poplack, 1980; Gumperz, 1982; Myers-Scotton, 1993; Milroy and Muysken, 1995; Auer, 2013). However, the growing popularity of computer mediated ∗* This work was done when the authors were affiliated with Microsoft Research. 1This paper uses the terms ‘code-switching’ and ‘codemixing’ interchangeably. communication, particularly social media, has resulted in language data in the text form which exhibits code-switching, among other speechlike characteristics (Crystal, 2001; Herring, 2003; Danet and Herring, 2007; Cardenas-Claros and Isharyanti, 2009). With the large amount of online content generated by multilingual users around the globe, it becomes necessary to design techniques to analyze mixed language, which can help not only in developing end-user applications, but also in conducting fundamental sociolinguistic studies. Language detection (LD) is a prerequisite to several NLP techniques. Most state-of-the-art LD systems detect a single language for an entire document or sentence. Such methods often fail to detect code-switching, which can occur within a sentence. In recent times, there has been some effort to build word-level LD for code-switching between a specific pair of languages (Nguyen and Dogru¨oz, 2013; Elfardy et al., 2013; Solorio et al., 2014; Barman et al., 2014). However, usually user-generated text (e.g., on social media) has no prior information of the languages being used. 
Further, as several previous social-media based studies on multilingualism have pointed out (Kim et al., 2014; Manley, 2012), lack of general wordlevel LD has been a bottleneck in studying codeswitching patterns in multilingual societies. This paper proposes a novel technique for wordlevel LD that generalizes to an arbitrarily large set of languages. The method does not require a priori information on the specific languages (potentially more than two) being mixed in an input text as long as the languages are from a fixed (arbitrarily large) set. Training is done without any manually annotated data, while achieving accuracies comparable to language-restricted systems trained 1971 with large amounts of labeled data. With a wordlevel LD accuracy of 96.3% on seven languages, this technique enabled us to analyze patterns of code-switching on Twitter, which is the second key contribution of this paper. To the best of our knowledge, this is the first quantitative study of its kind, particularly at such a large-scale. 2 Related Work In this section, we will briefly survey the language detection techniques (see Hughes et al. (2006) and Garg et al. (2014) for comprehensive surveys), and sociolinguistic studies on multilingualism (see Nguyen et al. (2016) for a detailed survey) that were enabled by these techniques. Early work on LD (Cavnar and Trenkle, 1994; Dunning, 1994) focused on detecting a single language for an entire document. These obtained high accuracies on well-formed text (e.g., news articles), which led to LD being considered solved (McNamee, 2005). However, there has been renewed interest with the amount of user-generated content on the web. Such text poses unique challenges such as short length, misspelling, idiomatic expressions and acronyms (Carter et al., 2013; Goldszmidt et al., 2013). Xia et al. (2009), Tromp and Pechenizkiy (2011) and Lui and Baldwin (2012) created LD systems for monolingual sentences, web pages and tweets. Zhang et al. (2016) built an unsupervised model to detect the majority language in a document. There has also been document-level LD that assigns multiple language to each document (Prager, 1999; Lui et al., 2014). However, documents were synthetically generated, restricted to inter-sentential language mixing. Also, these models do not fragment the document based on language, making language-specific analysis impossible. Document-level or sentence-level LD does not identify code-switching accurately, which can occur within a sentence. Word-level LD systems attempt to remedy this problem. Most work has been restricted to cases where two languages, known a priori, is to be detected in the input i.e, binary LD at the word-level. There has been work on Dutch-Turkish (Nguyen and Dogru¨oz, 2013), English-Bengali (Das and Gamb¨ack, 2014) and Standard and dialectal Arabic (Elfardy et al., 2013). King and Abney (2013) address wordlevel LD for bilingual documents in 30 language pairs, where the language pair is known a priori. The features for word-level LD proposed by Al-Badrashiny and Diab (2016) are languageindependent, however, at any given time, the model is only trained to tag a specific language pair. There have also been two shared task series on word-level LD: FIRE (Roy et al., 2013; Choudhury et al., 2014; Sequiera et al., 2015) focused on Indian languages and the EMNLP CodeSwitching Workshop (Solorio et al., 2014; Molina et al., 2016). These pairwise LD methods vary from dictionary-based to completely supervised and semi-supervised. 
None tackle the imminent lack of annotated data required for scaling to more than one language pair. There has been little research on word-level LD that is not restricted to two languages. Hammarstr¨om (2007) proposed a model for multilingual LD for short texts like queries. Gella et al. (2014) designed an algorithm for wordlevel LD across 28 languages. Jurgens et al. (2017) use an encoder-decoder architecture for word-level LD that supports dialectal variation and code-switching. However, these studies experiment with synthetically created multilingual data, constrained either by the number of language switches permitted or to phrase-level codeswitching, and are not equipped to handle the challenges posed by real-world code-switching. Using tweet-level LD systems like the CompactLanguageDetector2, there have been studies on multilingualism in specific cities like London (Manley, 2012) and Manchester (Bailey et al., 2013). These studies, as well as Bergsma et al. (2012), observe that existing LD systems fail on code-switched text. Kim et al. (2014) studied the linguistic behavior of bilingual Twitter users from Qatar, Switzerland and Qu´ebec, and also acknowledge that code-switching could not be studied due to the absence of appropriate LD tools. Using word-level LD for English-Hindi (Gella et al., 2013), Bali et al. 2014 observed that as much as 17% of Indian Facebook posts had codeswitching, and Rudra et al. (2016) showed that the native language is strongly preferred for expressing negative sentiment by English-Hindi bilinguals on Twitter. However, without accurate multilingual word-level LD, there have been no largescale studies on the extent and distribution of code-switching across various communities. 2https://www.npmjs.com/package/cld 1972 3 Generalized Word-level LD We present Generalized Word-Level Language Detection, or GWLD, where: • The number of supported languages can be arbitrarily large • Any number of the supported languages can be mixed within a single input • The languages in the input do not need to be known a priori • Any number of language switches are allowed in the input. • No manual annotation is required for training Formalizing our model, let w = wi=1...n be a natural language text consisting of a sequence of words, w1 to wn. For our current work, we define words to be whitespace-separated tokens (details in Sec 5). Let L = {l1, l2, . . . , lk} be a set of k natural languages. We assume that each wi can be assigned to a unique language lj ∈L. We also define universal tokens like numbers, emoticons, URLs, emails and punctuation, which do not belong to any specific natural language. Certain strings of alphabetic characters representing generic interjections or sounds, such as oh, awww, zzz also fall in this category. For labeling these tokens, we use an auxiliary set of labels, XL = {xl1, xl2, . . . , xlk}. Labeling each universal token with a specific language li (using xli) instead of generically labeling all such tokens xl allows preserving linguistic context when a memoryless model like Hidden Markov Models (HMM) are used for tagging. Further, various NLP tasks on might require the input text, including these universal tokens, to be split by language. For input w, let the output from the LD system be y = yi=1...n, a sequence of labels, where yi ∈ L ∪XL. yi = lj if and only if, in the context of w, wi is a word from lj. If wi is a universal token, yi = xlj, when yi−1 = lj or yi−1 = xlj. 
If w1 is a universal token, y1 = xlj, where lj is the label of the first token ∈L in the input. Fig. 1 shows a few examples of labeled codeswitched tweets. Named entities (NE) are assigned labels according to the convention used by King and Abney (2013). 4 Method Word-level LD is essentially a sequence labeling task. We use a Hidden Markov Model (HMM), though any other sequence labeling technique, e.g., CRFs, can be used as well. The intuition behind the model architecture is simple – a person who is familiar with k languages can easily recognize (and also understand) the words when any of those languages are codeswitched, even if s/he has never seen any mixed language text before. Analogously, is it possible that monolingual language models, when combined, can identify code-switched text accurately? Imagine we have k HMMs, where the ith HMM has two states li and xli. Each state can label a word. The HMMs are independent, but they are tied to a common start state s and end state e, forming a word-level LD model for monolingual text in one of the k languages. Now, we make transitions from li →lj possible, where i ̸= j. This HMM, shown in Fig. 2, is capable of generating and consequently, labeling code-switched text between any of the k languages. The solid and dotted lines show monolingual transitions and the added code-switching transitions respectively. Fig. 2 depicts three languages, however, the number of languages can be arbitrarily large. Obtaining word-level annotated monolingual and code-switched data is expensive and nearly infeasible for a large number of languages. Instead, we automatically create weakly-labeled monolingual text (set W) and use it to initialize the HMM parameters. We then use Baum-Welch reestimation on unlabeled data (set U) that has monolingual and code-switched text in their natural distribution. Sec. 5 discusses creation of W and U. 4.1 Structure, Initialization and Learning The structure of the HMM shown in Fig. 2 can be formally described using: • Set of states, S = s ∪L ∪XL ∪e • Set of observations, O • Emission matrix (|S| × |O|) • Transition matrix (|S| × |S|) O consists of all seen events in the data, and a special symbol unk for all unseen events. We define an event as a token n-gram and we experimented with n = 1 to 3. It is important to mention that the n-grams do not spread over language states. We also use special start and end symbols, which are observed at states s and e respectively. Elements of O are effectively what the states of the HMM ‘emit’ or generate during decoding. 1973 Ex(1): no\l2 me\l2 lebante\l2 ahorita\l2 cuz\l1 I\l1 felt\l1 como\l2 si\l2 me\l2 kemara\l2 por\l2 dentro\l2 !\xl2 :o\xl2 Then\l1 I\l1 started\l1 getting\l1 all\l1 red\l1 ,\xl1 I\l1 think\l1 im\l1 allergic\l1 a\l2 algo\l2 Ex(2): @XXXXX\xl3 @XXXXX\xl3 :)\xl3 :)\xl3 :)\xl3 :)\xl3 hahahahah\xl3 alles\l3 is\l3 3D\xl3 voor\l3 mama\l4 hatta\l4 4D\xl4 :P\xl4 :P\xl4 :P\xl4 :P\xl4 Havva\l4 &\xl4 Yusuf\l4 olunca\l4 misafir\l4 fln\l4 dinlemez\l4 !!\xl4 Figure 1: Examples of code-switched tweets and the corresponding language labels. l1 = English, l2 = Spanish, l3 = Dutch, l4 = Turkish. Usernames have been anonymized. Figure 2: GWLD Hidden Markov Model. s →xli and li →e transitions omitted for clarity. For any input, the HMM always starts in the state s. The parameters to be learned are the transition and emission matrices. We initialize these matrices using W. 
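To make the transition structure of Fig. 2 concrete, the following minimal sketch (in Python) assembles the transition matrix from monolingual counts and the code-switch probability π. The state ordering, the count keys, and the function names are our own illustrative choices, not the authors' implementation.

import numpy as np

def build_transition_matrix(k, mono_counts, pi=1e-3):
    # States are ordered: s, l_1..l_k, xl_1..xl_k, e (2k + 2 states).
    # mono_counts[i] is assumed to hold transition counts estimated from
    # the weakly-labeled monolingual set W for language i, keyed by pairs
    # such as ('s','l'), ('s','xl'), ('l','l'), ('l','xl'), ('xl','l'),
    # ('l','e'), ('xl','e').  pi is the code-switch probability assigned
    # to the added l_i -> l_j and xl_i -> l_j edges (i != j).
    n = 2 * k + 2
    S, E = 0, n - 1
    L = lambda i: 1 + i          # index of state l_i
    XL = lambda i: 1 + k + i     # index of state xl_i
    A = np.zeros((n, n))

    for i in range(k):
        c = mono_counts[i]
        # monolingual skeleton; xl_i -> xl_i is absent because successive
        # universal tokens are collapsed into one token in preprocessing
        A[S, L(i)] = c.get(('s', 'l'), 0)
        A[S, XL(i)] = c.get(('s', 'xl'), 0)
        A[L(i), L(i)] = c.get(('l', 'l'), 0)
        A[L(i), XL(i)] = c.get(('l', 'xl'), 0)
        A[XL(i), L(i)] = c.get(('xl', 'l'), 0)
        A[L(i), E] = c.get(('l', 'e'), 0)
        A[XL(i), E] = c.get(('xl', 'e'), 0)

    def normalize(M):
        rows = M.sum(axis=1, keepdims=True)
        return np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)

    A = normalize(A)             # monolingual transition probabilities

    # add the code-switching edges with a small probability pi,
    # then renormalize each row, as in the initialization described below
    for i in range(k):
        for j in range(k):
            if i != j:
                A[L(i), L(j)] += pi
                A[XL(i), L(j)] += pi
    return normalize(A)

The emission rows are initialized analogously from per-language statistics, as described next, before Baum-Welch reestimation.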
The trigram, bigram and unigram word counts from the data for each language in W are used to create language models (LM) with modified Kneser-Ney smoothing (Chen and Goodman, 1999). The emission values for state li are initialized with the respective LM probabilities for all seen n-grams. We also assign a small probability to unk. The emissions for the xli state are initialized using the counts of universal tokens for the language li in W. These are identified using the preprocessing techniques discussed in Sec. 5.1. Possible transitions for each monolingual HMM are li →li, li →xli and xli →li. We do not have the xli →xli transition, because preprocessing (Sec. 5.1) concatenates successive universal tokens into a single token. This does not change the output as the tokens can easily be separated after LD, but is a useful simplification for the model. The transition values for li are initialized by the probability of transitions between words and universal tokens in the text from W. As stated earlier, the model supports codeswitching by the addition of transitions li →lj, and xli →lj, for all i ̸= j. For each state li, there are 2k −2 new transitions (Fig. 2). We initialize these news edges with a small probability π, before normalizing transitions for each state. π, which we call the code-switch probability, is a hyperparameter tuned on a validation set. Starting with the initialized matrices, we reestimate the transition and emission matrices using the EM-like Baum-Welch algorithm (Welch, 2003) over the large set of unlabeled text U. 4.2 Decoding The input to the trained model is first preprocessed as described in Sec. 5.1 (tokenization and identification of universal tokens). The Viterbi algorithm is then used with the HMM parameters to perform word-level LD. When an unknown n-gram, is encountered, its emission probability is estimated by recursively backing off to (n −1)-gram, until we find a known n-gram. If the unigram, i.e., the token, is also unknown, then the observation of the symbol unk is used instead. 5 Dataset Creation The data for both training and testing comes primarily from Twitter because of its public API, and studies have shown the presence of codeswitching in social media (Crystal, 2001; Herring, 2003; Danet and Herring, 2007; Cardenas-Claros and Isharyanti, 2009; Bali et al., 2014). Our experiments use monolingual and codeswitched tweets in seven languages – Dutch (nl), English (en), French (fr), German (de), Portuguese (pt), Spanish (es) and Turkish (tr). These form the set L. The choice of languages is motivated by several factors. First, LD is non-trivial as all these languages use the Latin script. Second, a large volume of tweets are generated in these languages. 1974 Third, there is annotated code-switched data available in nl-tr and en-es, which can be used for validation and testing. Lastly, we know that certain pairs of these languages are code-switched often. 5.1 Collection and Preprocessing Using the Twitter API (Twitter, 2013), we collected tweets over May-July 2015. We selected tweets identified by Twitter LD API (Twitter, 2015) as one of the languages in L. We also removed non-Latin script tweets. As preprocessing, each tweet is first tokenized using ark-twitter (Gimpel et al., 2011) and URLs, hashtags and user mentions are identified using regular expressions. We also identify emoticons, punctuation, digits, special characters, and some universal interjections and abbreviations (such as RT, aww) as universal tokens. 
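A rough sketch of this preprocessing step is given below, with whitespace splitting standing in for the ark-twitter tokenizer and purely illustrative regular expressions; consecutive universal tokens are collapsed so that the model never needs an xl_i -> xl_i transition.

import re

# Illustrative patterns only; the authors use the ark-twitter tokenizer
# together with their own regular expressions and dictionaries.
UNIVERSAL_PATTERNS = [
    re.compile(r'^https?://\S+$'),                      # URLs
    re.compile(r'^[@#]\w+$'),                           # user mentions, hashtags
    re.compile(r'^\d+$'),                               # digits
    re.compile(r'^[^\w\s]+$'),                          # punctuation / emoticons
    re.compile(r'^(rt|aw+|o+h+|z+)$', re.IGNORECASE),   # generic interjections
]

def is_universal(token):
    return any(p.match(token) for p in UNIVERSAL_PATTERNS)

def preprocess(tweet):
    # Tokenize and collapse runs of universal tokens into a single token.
    tokens = tweet.split()          # stand-in for ark-twitter tokenization
    out = []
    for tok in tokens:
        if is_universal(tok) and out and out[-1][1]:
            out[-1] = (out[-1][0] + ' ' + tok, True)    # extend universal run
        else:
            out.append((tok, is_universal(tok)))
    return out                      # list of (token, is_universal) pairs

# Example:
# preprocess("I felt great ! :) http://t.co/x")
# -> [('I', False), ('felt', False), ('great', False), ('! :) http://t.co/x', True)]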
We use an existing dictionary (Chittaranjan et al., 2014) for the latter. Let the set of tweets after preprocessing be T . 5.2 Sets W and U We use the COVERSET algorithm (Gella et al., 2014) on each tweet in T . It obtains a confidence score for a word wi belonging to a language lj using a Naive Bayes classifier trained on Wikipedia. These scores are used to find the minimal set of languages are required to label all the input words. If COVERSET detects the tweet as monolingual (i.e., one language can label all words) and the identified language is the same as the Twitter LD label, the tweet is added to the weakly-labeled set W. These tweets are almost certainly monolingual, as COVERSET has very high recall (and low precision) for detecting code-switching. As these are not manually labeled, we call them weaklylabeled. W contains 100K tweets in each language (700K in total). From T , we randomly select 100K tweets in each of the seven languages based on the Twitter LD API labels. These tweets do not have wordlevel language labels and may be code-switched or have an incorrect Twitter language label. We use these as unlabeled data, the set U. 5.3 Validation and Test Sets We curate two word-level gold-standard datasets for validation and testing. These sets contain monolingual tweets in each of the seven languages as well as code-switched tweets from certain language pairs, based on the availability of real-world data. However, it must be noted that GWLD can L1-L2 Tweets L1 Tokens L2 Tokens nl 100 (100) 965 (1099) – fr 100 (102) 1085 (1045) – pt 100 (100) 1080 (967) – de 101 (100) 1078 (890) – tr 100 (100) 939 (879) – es 100 (100) 1067 (1119) – en 100 (100) 1161 (1006) – nl-en 65 (50) 498 (436) 243 (174) fr-en 50 (48) 428 (370) 224 (227) pt-en 53 (53) 463 (513) 278 (242) de-en 49 (50) 417 (459) 293 (292) tr-en 50 (50) 347 (336) 238 (209) es-en 3013 (52) 8510 (355) 16356 (395) nl-tr 735 (728) 5895 (8590) 5293 (8140) Table 1: Test Set Statistics (Validation Set in parentheses). Rows in gray show existing datasets. detect code-switching between more than two languages. The language-wise distribution is shown in Table 1. Including universal tokens, the validation and test set contain 33981 and 58221 tokens respectively. The annotated tweets will be made available for public use. For es-en, we use the word-level annotated test set from the code-switching shared task on language detection (Solorio et al., 2014). We ignore the tokens labeled NE, Ambiguous and Mixed during our system evaluation (Sec. 6), as they do not fall in the scope of this work. The words labeled ‘Other’ were marked as xli where li is en or es, based on the context. We also use existing nltr validation and test sets (Nguyen and Dogru¨oz, 2013), which contain posts from a web forum. For the other language pairs, we created our own validation and test sets, as none already exist. We randomly selected tweets for which COVERSET identified code-switching with high confidence. We gave 215 of these to six annotators for word-level annotation. It is difficult to find annotators who know all seven languages; elaborate guidelines were provided on using online machine translation, dictionaries and search engines for the task. Four out of the six annotators had high inter-annotator agreement – the agreement on L1 (language that the majority of the words in the tweet belong to) was 0.93, L2 (the other language, whenever present) was 0.8 and whether the tweet is code-switched was 0.84. 
We did not find any instances of code-switching between more than two 1975 Systems Acc L1L2Acc IsMix Dictionary-based Baselines MAXFREQ 0.824 0.752 0.600 MINCOVER 0.853 0.818 0.733 Existing Systems LINGUINI NA 0.529 0.783 LANGID NA 0.830 0.783 POLYGLOT NA 0.521 0.692 GWLD: The Proposed Method Initial 0.838 0.825 0.837 Reestimated 0.963 0.914 0.88 Table 2: Performance of LD Systems on Test Set languages, which is rare in general. We distributed 3000 tweets between the four annotators (monolingual and code-switched tweets from COVERSET). Disagreements were settled between the annotators and a linguist. A subset of the annotated tweets form the validation and test sets (Table 1), and were removed from W and U. 6 Experiments and Results We compare GWLD with three existing systems: LINGUINI (Prager, 1999), LANGID (Lui and Baldwin, 2012), and POLYGLOT (Lui et al., 2014). None of these perform word-level LD, however, LANGID and POLYGLOT return a list of languages with confidence scores for the input. Since codeswitching with more than two languages is absent in our dataset, we consider up to two language labels. We define the tweet to be monolingual if the difference between the confidence values for the top two languages is greater than a parameter δ. Otherwise, it is assumed to be code-switched with the top two languages. δ is tuned independently for the two LD systems on the validation set by maximizing the metric L1L2 Accuracy (Sec. 6.2). Inspired by Gella et al. (2013), we also compare with dictionary-based word-level LD baselines. 6.1 Dictionary-based Baselines For each language, we build a lexicon of all the words and their frequencies found in W for that language. Let the lexicon for language li ∈L be lexi. Let f(lexi, wj) be the frequency of wj in lexi. We define the following baselines: MAXFREQ: For each wj in w, MAXFREQ returns lexi that has the maximum frequency for that token. Therefore, the language label for wj is yj = l[arg maxi f(lexi,wj)]. If the token is not found in any lexicon, yj is assigned the value of yj−1. MINCOVER: We find the smallest subset mincov(w) ⊂L, such that for all wj in input w, we have at least one language li ∈mincov(w) with f(lexi, wj) > 0. If there is no such language, then wj is not considered while computing mincov(w). Once mincov(w) is obtained, labels yi are computed using the MAXFREQ strategy, where the set of languages is restricted to mincov(w) instead of L. Note that mincov(w) need not be unique for w; in such cases, we choose the mincov(w) which maximizes the sum of lexical frequencies based on MAXFREQ labels. 6.2 Metrics We define the Accuracy (Acc) of an LD system as the fraction of words in the test set that are labeled correctly. Since the existing LD systems do not label languages at word-level, we also define: IsMix is the fraction of tweets that are correctly identified as either monolingual or code-mixed. L1L2 Accuracy (L1L2Acc) is the mean accuracy of detecting language(s) at tweet-level. For monolingual tweets, this accuracy is 1 if the gold standard label is detected by the LD system, else 0. For code-switched tweets, the accuracy is 1 if both languages are detected, 0.5 if one language is detected, and 0 otherwise. L1L2Acc is the average over all test set tweets. 6.3 Results We use these metrics to assess performance on the test set for the baselines, existing LD systems and GWLD (Table 2). Initial refers to the HMM model estimated from W and Reestimated refers to the final model after Baum-Welch reestimation. 
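For concreteness, the tweet-level metrics defined above can be computed along the following lines; this is a minimal sketch in which word-level labels are strings such as 'l1' and 'xl1' (as in Fig. 1) and the helper names are ours.

def tweet_languages(labels):
    # map each auxiliary label xl_i back to l_i and collect the set of
    # natural languages present in a word-level label sequence
    return {lab[1:] if lab.startswith('x') else lab for lab in labels}

def word_accuracy(gold_tweets, pred_tweets):
    # Acc: fraction of words labeled correctly
    pairs = [(g, p) for gs, ps in zip(gold_tweets, pred_tweets)
             for g, p in zip(gs, ps)]
    return sum(g == p for g, p in pairs) / len(pairs)

def is_mix(gold_tweets, pred_tweets):
    # IsMix: fraction of tweets correctly identified as monolingual or mixed
    hits = sum((len(tweet_languages(g)) > 1) == (len(tweet_languages(p)) > 1)
               for g, p in zip(gold_tweets, pred_tweets))
    return hits / len(gold_tweets)

def l1l2_accuracy(gold_tweets, pred_tweets):
    # L1L2Acc: 1/0 for monolingual tweets; for code-switched tweets the
    # fraction of gold languages detected (1, 0.5 or 0 for two languages)
    scores = []
    for g, p in zip(gold_tweets, pred_tweets):
        gl, pl = tweet_languages(g), tweet_languages(p)
        if len(gl) == 1:
            scores.append(1.0 if gl <= pl else 0.0)
        else:
            scores.append(len(gl & pl) / len(gl))
    return sum(scores) / len(scores)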
The parameter π is tuned on the validation set using grid search. Reestimated GWLD has the best accuracy of 0.963 and performs significantly better than all the other systems for all metrics. Reesimatation improves the word-level Acc for L1 from 0.89 to 0.97 and for L2 from 0.43 to 0.82. LINGUINI and POLYGLOT likely have low L1L2Acc because they are trained on synthetically-created documents with no word-level code-switching. Since our test set contains pre-existing annotations for en-es (Solorio et al., 2014) and nl-tr (Nguyen and Dogru¨oz, 2013), we compare with state-of-the-art results on those datasets. On en-es tokens, Al-Badrashiny and Diab (2016) reports an F1-score of 0.964; GWLD obtains 0.978. Nguyen and Dogru¨oz (2013) report 0.976 Acc on the nl-tr 1976 (a) (b) Figure 3: Acc versus Dataset Parameters Figure 4: Acc versus Number of Languages test set. We obtain a less competitive 0.936. However, when errors between nl-en are ignored as most of these are en words with nl gold-standard labels (convention followed by the dataset creators), the revised Acc is 0.963. Notably, unlike GWLD, both these models use large amounts of annotated data for training and are restricted to detecting only two languages. Error Analysis: GWLD sometimes detects languages that are not present in the tweet, which account for a sizable fraction (39%) of all word-level errors. Not detecting a language switch causes 8% of the errors. Most other errors are caused by named entities, single-letter tokens, unseen words and the nl-en annotation convention in the test set from Nguyen and Dogru¨oz (2013). 6.4 Robustness of GWLD We test the robustness of GWLD by varying the size of the weakly-labeled set, the unlabeled dataset and the number of languages the model is trained to support. 6.4.1 Size of W and U The variation of Acc with the size of W is shown in Figure 3a. Even with 0.25% of the set (250 L1-L2 Acc IsCM GWLD-Acc nl-en 0.979 0.943 0.967 fr-en 0.982 0.948 0.969 pt-en 0.977 0.952 0.964 de-en 0.984 0.956 0.975 tr-en 0.985 0.984 0.983 es-en 0.954 0.929 0.978 nl-tr 0.975 0.907 0.936 Table 3: Statistics for Pairwise (col. 2 and 3) and GWLD Systems tweets for each li ∈L), the model has accuracy of nearly 0.96. A slow rise in accuracy is observed as the number of tweets in W is increased. We also experiment with varying the size of U. In Figure 3a, we see that with 0.25% of U (around 1,400 randomly sampled tweets), the accuracy on the test set is lower than 0.91. This quickly increases with 10% of U. Thus, GWLD achieves Acc comparable to existing systems with very little weakly-labeled data (just 250 tweets per language, which are easily procurable for most languages) and around 50,000 unlabeled tweets. 6.4.2 Noise in W Since a small, but pure, W gives high accuracy (Sec. 6.4.1), we evaluate how artificially introduced noise affects Acc. The noise introduced into the W of each language comes uniformly from the other six languages. Figure 3b shows how increasing fractions of noise slowly degrades accuracy, with a steep drop to 0.11 accuracy at 90% noise, where the tweets from each incorrect language outnumber the correct language tweets. We test this with a pairwise model as well, as noise from a single language might have greater effect. The accuracy falls to 0.36 at 50% noise (Fig. 3b). At this point, W has an equal number of tweets from each language and is essentially useless. 6.4.3 Number of languages Pairwise Models: Table 3 details two performance metrics (defined in Sec. 
5.2) for our model trained on only two languages and the corresponding 7-language GWLD Acc for that language pair. Incremental Addition of Languages: We test Acc while incrementally adding languages to the model in a random order (nl-en-pt-fr-de-es-tr). Figure 4 shows the variation in Acc for nl-en, pten and fr-en as more languages are added to the 1977 Figure 5: Worldwide distribution of monolingual and CS tweets (left and right charts respectively) Figure 6: Worldwide CS point distribution model. Although there is a slight degradation, in absolute terms, the accuracy remains very high. 7 Code-Switching on Twitter The high accuracy and fast processing speed (the current multithreaded implementation labels 2.5M tweets per hour) of GWLD enables us to conduct large-scale and reliable studies of CS patterns on Twitter for the 7 languages. In this paper, we conduct two such studies. The first study analyzes 50M tweets from across the world to understand the extent and broad patterns of switching among these languages. In the second study, we analyze 8M tweets from 24 cities to gain insights into geography-specific CS patterns. 7.1 Worldwide Code-Switching Trends We collected 50 million unique tweets that were identified by the Twitter LD API as one of the 7 languages. We place this constraint to avoid tweets from unsupported languages during analysis. Figure 5 shows the overall language distribution, including the CS language-pair distribution. Approximately 96.5% of the tweets are monolingual, a majority of which are en (74%). Around 3.5% of all tweets are code-switched. Globally, en-es, en-fr and en-pt are the three most commonly mixed pairs accounting for 21.5%, 20.8% and 18.4% of all CS tweets in our data respectively. Interestingly, 85.4% of the CS tweets have en as one of the languages; fr is the next most popularly mixed language, with fr-es (3.2%), fr-pt (1.2%) and fr-nl (0.6%) as the top three observed pairs. Although around 1% of CS tweets were detected as containing more than two languages, these likely have low precision because of language overdetection as discussed in Sec. 6.3. Figure 6 shows the fraction of code-switch points, i.e., how many times the language changes in a CS tweet, for all the languages, as well as for three language pairs with to highlight different trends. Most CS tweets have one CS-point, which implies that the tweet begins with one language, and then ends with another. Such tweets are very frequent for en-de where we observe that usually the tweets state the same fact in both en and de. This so-called translation function (Begum et al., 2016) of CS is probably adopted for reaching out to a wider and global audience. In contrast, es-fr tweets have fewer tweets with single and far more with two CS-point than average. Tweets with two CS-points typically imply the inclusion of a short phrase or chunk from another language. en-tr tweets have the highest number of CS-points, implying rampant and fluid switching between the two languages at all structural levels. 7.2 City-Specific Code-Switching Trends Cosmopolitan cities are melting pots of cultures, which make them excellent locations for studying multilingualism and language interaction, including CS (Bailey et al., 2013). We collected tweets from 24 populous and highly cosmopolitan cities from Europe, North America and South America, where the primarily spoken language is one of the 7 languages detectable by GWLD. Around 8M tweets were collected from these cities. 
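A code-switch point can be read directly off the word-level label sequence; the short sketch below counts them, under the assumption (ours) that a universal token inherits the language of its auxiliary label.

def cs_points(labels):
    # number of positions where the language changes in a labeled tweet;
    # xl_i labels are mapped back to l_i before counting
    langs = [lab[1:] if lab.startswith('x') else lab for lab in labels]
    return sum(a != b for a, b in zip(langs, langs[1:]))

# e.g. cs_points(['l1', 'l1', 'l2', 'xl2', 'l1']) == 2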
Table 4 shows the top and bottom 6 cities, ranked by the fraction of CS tweets from that city. The total number of tweets analyzed and the top two CS pairs, along with their fractions (of CS tweets from that city) are also reported. More details can be found in the supplementary material. It is interesting to note that the 6 cities with lowest CS tweet fractions have en as the major language, whereas the 6 cities with highest CS fractions are from non-English (Turkish, Spanish and French) speaking geographies. In fact, the Pearson’s cor1978 Cities with highest fraction of CS tweet Cities with lowest fraction of CS tweets City Tweets CS-fraction (CS pairs) City Tweets CS-fraction (CS pairs) Istanbul 351K .12 (en-tr .53, nl-tr .13) Houston 588K .01 (en-es .22, en-fr .21) Qu´ebec City 108K .08 (en-fr .45, es-fr .23) San Francisco 532K .02 (en-es .26, en-fr .19) Paris 158K .07 (en-fr .43, fr-pt .21) NYC 690K .02 (en-es .21, en-fr .19) Mexico City 332K .07 (en-es .54, es-fr .14) Miami 290K .02 (en-es .33, en-pt .20) Brussels 100K .06 (en-fr .37, es-fr .15) London 492K .02 (en-fr .26, en-pt .17) Madrid 147K .06 (en-es .43, es-fr .32) San Diego 432K .02 (en-es .29, en-fr .14) Table 4: Top (left) and bottom (right) six cities according to the fraction of CS tweets. Figure 7: en-es Run Length relation between the fraction of monolingual English tweets and CS tweets for these 24 cities is −0.85. Further, from Table 4 one can also observe that for non-English speaking geographies, the majority language is most commonly mixed with English, followed by French (Spanish, if French is the majority language). Istanbul is an exception, where Dutch is the second most commonly mixed language with Turkish, presumably because of the large Turkish immigrant population in Netherlands resulting in a sizeable TurkishDutch bilingual diaspora (Do˘gru¨oz and Backus, 2009; Nguyen and Dogru¨oz, 2013). Is there a difference in the way speakers mix a pair of languages, say en and es, in en-speaking goegraphies like San Diego, Miami, Houston and New York City, and es-speaking geographies like Madrid, Barcelona, Buenos Aires and Mexico City? Indeed, as shown in Fig. 7, the distribution of the lengths of en and es runs (contiguous sequence of words in a single language beginning and ending with either a CS-point or beginning/end of a tweet) in en-es CS tweets is significantly different in en-speaking and es-speaking geographies. en runs are longer in en-speaking cities and vice versa, showing that the second language is likely used in short phrases. 8 Conclusion and Future Work We present GWLD, a system for word-level language detection for an arbitrarily large set of languages that is completely unsupervised. Our results on monolingual and code-switched tweets in seven Latin script languages show a high 0.963 accuracy, significantly out-performing existing systems. Using GWLD, we conducted a large-scale study of CS trends among these languages, both globally and in specific cities. One of the primary observations of this study is that while code-switching on Twitter is common worldwide (3.5%), it is much more common in non-English speaking cities like Istanbul (12%) where 90% of the population speak Turkish. On the other hand, while a third of the population of Houston speaks Spanish and almost everybody English, only 1% of the tweets from the city are code-switched. 
All the trends indicate a global dominance of English, which might be because Twitter is primarily a medium for broadcast, and English tweets have a wider audience. Bergsma et al. (2012) show that “[On Twitter] bilinguals bridge between monolinguals with English as a hub, while monolinguals tend not to directly follow each other.” Androutsopoulos (2006) argues that due to linguistic non-homogenity of online public spaces, languages like en, fr and de are typically preferred for communication, even though in private spaces, ”bilingual talk” differs considerably in terms of distribution and CS patterns. As future directions, we plan to extend GWLD to several other languages and conduct similar sociolinguistic studies on CS patterns including not only more languages and geographies, but also other aspects like topic and sentiment. Acknowledgments We would like to thank Prof. Shambavi Pradeep and her students from BMS College of Engineering for assisting with data annotation. We are also grateful to Ashutosh Baheti and Silvana Hartmann from Microsoft Research (Bangalore, India) for help with data organization and error analysis. 1979 References Mohamed Al-Badrashiny and Mona Diab. 2016. Lili: A simple language independent approach for language identification. In Proceedings of the 26th International Conference on Computational Linguistics (COLING). Osaka, Japan. Jannis Androutsopoulos. 2006. Multilingualism, diaspora, and the internet: Codes and identities on german-based diaspora websites. Journal of Sociolinguistics 10(4):520–547. Peter Auer. 2013. Code-switching in conversation: Language, interaction and identity. Routledge. George Bailey, Joseph Goggins, and Thomas Ingham. 2013. What can Twitter tell us about the language diversity of Greater Manchester? In Report by Multilingual Manchester. School of Languages, Linguistics and Cultures at the University of Manchester. http://bit.ly/2kG42Qf. Kalika Bali, Yogarshi Vyas, Jatin Sharma, and Monojit Choudhury. 2014. “I am borrowing ya mixing?” an analysis of English-Hindi code mixing in Facebook. In Proceedings of the First Workshop on Computational Approaches to Code Switching. Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014. Code mixing: A challenge for language identification in the language of social media. In Proceedings of the First Workshop on Computational Approaches to Code Switching. Rafiya Begum, Kalika Bali, Monojit Choudhury, Koustav Rudra, and Niloy Ganguly. 2016. Functions of code-switching in tweets: An annotation framework and some initial experiments. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC). Shane Bergsma, Paul McNamee, Mossaab Bagdouri, Clayton Fink, and Theresa Wilson. 2012. Language identification for creating language-specific twitter collections. In Proceedings of the second workshop on language in social media. Association for Computational Linguistics. M´onica Stella Cardenas-Claros and Neny Isharyanti. 2009. Code-switching and code-mixing in internet chatting: Between yes, ya, and si a case study. In The JALT CALL Journal, 5. Simon Carter, Wouter Weerkamp, and Manos Tsagkias. 2013. Microblog language identification: Overcoming the limitations of short, unedited and idiomatic text. Language Resources and Evaluation Journal 47:195–215. William B Cavnar and John M Trenkle. 1994. N-grambased text categorization . Stanley F Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. 
Computer Speech & Language 13(4):359–393. Gokul Chittaranjan, Yogrshi Vyas, Kalika Bali, and Monojit Choudhury. 2014. Word-level language identication using crf : Code-switching shared task report of msr india system. In Proceedings of the First Workshop on Computational Approaches to Code Switching. Monojit Choudhury, Gokul Chittaranjan, Parth Gupta, and Amitava Das. 2014. Overview of FIRE 2014 track on transliterated search . David Crystal. 2001. Language and the Internet. Cambridge University Press. Brenda Danet and Susan Herring. 2007. The Multilingual Internet: Language, Culture, and Communication Online. Oxford University Press., New York. Amitava Das and Bjorn Gamb¨ack. 2014. Identifying languages at the word level in code-mixed indian social media text. In Proceedings of the 11th International Conference on Natural Language Processing. Goa, India, pages 169–178. A Seza Do˘gru¨oz and Ad Backus. 2009. Innovative constructions in dutch turkish: An assessment of ongoing contact-induced change. Bilingualism: language and cognition 12(01):41–63. Ted Dunning. 1994. Statistical identification of language. Computing Research Laboratory, New Mexico State University. Heba Elfardy, Mohamed Al-Badrashiny, and Mona Diab. 2013. Code switch point detection in arabic. In Natural Language Processing and Information Systems, Springer, pages 412–416. Archana Garg, Vishal Gupta, and Manish Jindal. 2014. A survey of language identification techniques and applications. Journal of Emerging Technologies in Web Intelligence 6(4):388–400. Spandana Gella, Kalika Bali, and Monojit Choudhury. 2014. “ye word kis lang ka hai bhai?” testing the limits of word level language identification. In NLPAI. Spandana Gella, Jatin Sharma, and Kalika Bali. 2013. Query word labeling and back transliteration for indian languages: Shared task system description . Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and A. Noah Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL). Moises Goldszmidt, Marc Najork, and Stelios Paparizos. 2013. Boot-strapping language identifiers for short colloquial postings. In Machine Learning and Knowledge Discovery in Databases, volume 8189 of Lecture Notes in Computer Science, pages 95–111. 1980 John. J. Gumperz. 1982. Discourse strategies. Cambridge University Press, Cambridge. Harald Hammarstr¨om. 2007. A fine-grained model for language identification. In In Workshop of Improving Non English Web Searching. Proceedings of iNEWS 2007 Workshop at SIGIR. Susan Herring, editor. 2003. Media and Language Change. Special issue of Journal of Historical Pragmatics 4:1. Baden Hughes, Timothy Baldwin, SG Bird, Jeremy Nicholson, and Andrew MacKinlay. 2006. Reconsidering language identification for written language resources . David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada. Suin Kim, Ingmar Weber, Li Wei, and Alice Oh. 2014. Sociolinguistic analysis of twitter in multilingual societies. In Proceedings of the 25th ACM conference on Hypertext and social media. Ben King and Steven Abney. 2013. Labeling the languages of words in mixed-language documents using weakly supervised methods. 
In Proceedings of NAACL-HLT. pages 1110–1119. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In In Proceedings of the ACL 2012 System Demonstrations. pages 25–30. Marco Lui, Jey Han Lau, and Timothy Baldwin. 2014. Automatic detection and language identification of multilingual documents. In Transactions of the Association for Computational Linguistics. Ed Manley. 2012. Detecting languages in Londons Twittersphere. In Blog post: Urban Movements. http://bit.ly/2kBytHm. P. McNamee. 2005. Language identification: A solved problem suitable for undergraduate instruction. Journal of Computing Sciences in Colleges 20. Lesley Milroy and Pieter Muysken. 1995. One speaker, two languages: Cross-disciplinary perspectives on code-switching. Cambridge University Press. Giovanni Molina, Nicolas Rey-Villamizar, Thamar Solorio, Fahad AlGhamdi, Mahmoud Ghoneim, Abdelati Hawwari, and Mona Diab. 2016. Overview for the second shared task on language identification in code-switched data. EMNLP 2016 page 40. Carol Myers-Scotton. 1993. Dueling Languages: Grammatical Structure in Code-Switching. Claredon, Oxford. Dong Nguyen and A. Seza Dogru¨oz. 2013. Word level language identification in online multilingual communication. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Dong Nguyen, A Seza Do˘gru¨oz, Carolyn P Ros´e, and Franciska de Jong. 2016. Computational sociolinguistics: A survey. Computational Linguistics . Shana Poplack. 1980. Sometimes Ill start a sentence in Spanish y termino en espaol. Linguistics 18:581– 618. John M Prager. 1999. Language identification for multilingual documents. In Systems Sciences, 1999. HICSS-32. Proceedings of the 32nd Annual Hawaii International Conference. Rishiraj Saha Roy, Monojit Choudhury, Prasenjit Majumder, and Komal Agarwal. 2013. Overview and datasets of FIRE 2013 track on transliterated search. In Working Notes of FIRE. Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2016. Understanding language preference for expression of opinion and sentiment: What do Hindi-English speakers do on Twitter? In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Royal Sequiera, Monojit Choudhury, Parth Gupta, Paolo Rosso, Shubham Kumar, Somnath Banerjee, Sudip Kumar Naskar, Sivaji Bandyopadhyay, Gokul Chittaranjan, Amitava Das, and Kunal Chakma. 2015. Overview of fire-2015 shared task on mixed script information retrieval. In Working Notes of FIRE. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Gohneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language identification in code-switched data. Proceedings of The First Workshop on Computational Approaches to Code Switching . Erik Tromp and Mykola Pechenizkiy. 2011. Graphbased n-gram language identification on short texts. In In Proc. 20th Machine Learning conference of Belgium and The Netherlands. pages 27–34. Twitter. 2013. GET statuses/sample — Twitter Developers. https://dev.twitter.com/docs/api/1/get/statuses/sample. Twitter. 2015. GET help/languages — Twitter Developers. https://dev.twitter.com/rest/reference/get/help/languages. Lloyd R Welch. 2003. Hidden markov models and the baum-welch algorithm. IEEE Information Theory Society Newsletter 53(4):10–13. 1981 Fei Xia, William D Lewis, and Hoifung Poon. 2009. 
Language id in the context of harvesting language data off the web. In In Proceedings of the 12th EACL. pages 870–878. Wei Zhang, Robert AJ Clark, Yongyuan Wang, and Wen Li. 2016. Unsupervised language identification based on latent dirichlet allocation. Computer Speech & Language 39:47–66. 1982
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1983–1992 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1181 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1983–1992 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1181 Using Global Constraints and Reranking to Improve Cognates Detection Michael Bloodgood Department of Computer Science The College of New Jersey Ewing, NJ 08628 [email protected] Benjamin Strauss Computer Science and Engineering Dept. The Ohio State University Columbus, OH 43210 [email protected] Abstract Global constraints and reranking have not been used in cognates detection research to date. We propose methods for using global constraints by performing rescoring of the score matrices produced by state of the art cognates detection systems. Using global constraints to perform rescoring is complementary to state of the art methods for performing cognates detection and results in significant performance improvements beyond current state of the art performance on publicly available datasets with different language pairs and various conditions such as different levels of baseline state of the art performance and different data size conditions, including with more realistic large data size conditions than have been evaluated with in the past. 1 Introduction This paper presents an effective method for using global constraints to improve performance for cognates detection. Cognates detection is the task of identifying words across languages that have a common origin. Automatic cognates detection is important to linguists because cognates are needed to determine how languages evolved. Cognates are used for protolanguage reconstruction (Hall and Klein, 2011; Bouchard-Cˆot´e et al., 2013). Cognates are important for cross-language dictionary look-up and can also improve the quality of machine translation, word alignment, and bilingual lexicon induction (Simard et al., 1993; Kondrak et al., 2003). A word is traditionally only considered cognate with another if both words proceed from the same ancestor. Nonetheless, in line with the conventions of previous research in computational linguistics, we set a broader definition. We use the word ‘cognate’ to denote, as in (Kondrak, 2001): “...words in different languages that are similar in form and meaning, without making a distinction between borrowed and genetically related words; for example, English ‘sprint’ and the Japanese borrowing ‘supurinto’ are considered cognate, even though these two languages are unrelated.” These broader criteria are motivated by the ways scientists develop and use cognate identification algorithms in natural language processing (NLP) systems. For cross-lingual applications, the advantage of such technology is the ability to identify words for which similarity in meaning can be accurately inferred from similarity in form; it does not matter if the similarity in form is from strict genetic relationship or later borrowing (Mericli and Bloodgood, 2012). Cognates detection has received a lot of attention in the literature. The research of the use of statistical learning methods to build systems that can automatically perform cognates detection has yielded many interesting and creative approaches for gaining traction on this challenging task. 
Currently, the highest-performing state of the art systems detect cognates based on the combination of multiple sources of information. Some of the most indicative sources of information discovered to date are word context information, phonetic information, word frequency information, temporal information in the form of word frequency distributions across parallel time periods, and word burstiness information. See section 3 for fuller explanations of each of these sources of information that state of the art systems currently use. Scores for all pairs of words from language L1 x language L2 are generated by generating component scores based on these sources of information and then combining them in an appropriate manner. Simple methods of combination are giving equal weight1983 ing for each score, while state of the art performance is obtained by learning an optimal set of weights from a small seed set of known cognates. Once the full matrix of scores is generated, the word pairs with the highest scores are predicted as being cognates. The methods we propose in the current paper consume as input the final score matrix that state of the art methods create. We test if our methods can improve performance by generating new rescored matrices by rescoring all of the pairs of words by taking into account global constraints that apply to cognates detection. Thus, our methods are complementary to previous methods for creating cognates detection systems. Using global constraints and performing rescoring to improve cognates detection has not been explored yet. We find that rescoring based on global constraints improves performance significantly beyond current state of the art levels. The cognates detection task is an interesting task to apply our methods to for a few reasons: • It’s a challenging unsolved task where ongoing research is frequently reported in the literature trying to improve performance; • There is significant room for improvement in performance; • It has a global structure in its output classifications since if a word lemma1 wi from language L1 is cognate with a word lemma wj from language L2, then wi is not cognate with any other word lemma from L2 different from wj and wj is not cognate with any other word lemma wk from L1. • There are multiple standard datasets freely and publicly available that have been worked on with which to compare results. • Different datasets and language pairs yield initial score matrices with very different qualities. Some of the score matrices built using the existing state of the art best approaches yield performance that is quite low (11-point interpolated average precision of only approximately 16%) while some of these score 1A lemma is a base form of a word. For example, in English the words ‘baked’ and ‘baking’ would both map to the lemma ‘bake’. Lemmatizing software exists for many languages and lemmatization is a standard preprocessing task conducted before cognates detection. matrices for other language pairs and data sets have state of the art score matrices that are already able to achieve 11-point interpolated average precision of 57%. Although we are not aware of work using global constraints to perform rescoring to improve cognates detection, there are related methodologies for reranking in different settings. Methodologically related work includes past work in structured prediction and reranking (Collins, 2002; Collins and Roark, 2004; Collins and Koo, 2005; Taskar et al., 2005a,b). 
Note that in these past works, there are many instances with structured outputs that can be used as training data to learn a structured prediction model. For example, a seminal application in the past was using online training with structured perceptrons to learn improved systems for performing various syntactic analyses and tagging of sentences such as POS tagging and base noun phrase chunking (Collins, 2002). Note that in those settings the unit at which there are structural constraints is a sentence. Also note that there are many sentences available so that online training methods such as discriminative training of structured perceptrons can be used to learn structured predictors effectively in those settings. In contrast, for the cognates setting the unit at which there are structural constraints is the entire set of cognates for a language pair and there is only one such unit in existence (for a given language pair). We call this a single overarching global structure to make the distinction clear. The method we present in this paper deals with a single overarching global structure on the predictions of all instances in the entire problem space for a task. For this type of setting, there is only a single global structure in existence, contrasted with the situation of there being many sentences each imposing a global structure on the tagging decisions for that individual sentence. Hence, previous structured prediction methods that require numerous instances each having a structured output on which to train parameters via methods such as perceptron training are inapplicable to the cognates setting. In this paper we present methods for rescoring effectively in settings with a single overarching global structure and show their applicability to improving the performance of cognates detection. Still, we note that philosophically our method builds on previous structured prediction methods since in both cases there is a similar intuition in that we’re 1984 using higher-level structural properties to inform and accordingly alter our system’s predictions of values for subitems within a structure. In section 2 we present our methods for performing rescoring of matrices based on global constraints such as those that apply for cognates detection. The key intuition behind our approach is that the scoring of word pairs for cognateness ought not be made independently as is currently done, but rather that global constraints ought to be taken into account to inform and potentially alter system scores for word pairs based on the scores of other word pairs. In section 3 we provide results of experiments testing the proposed methods on the cognates detection task on multiple datasets with multiple language pairs under multiple conditions. We show that the new methods complement and effectively improve performance over state of the art performance achieved by combining the major research breakthroughs that have taken place in cognates detection research to date. Complete precision-recall curves are provided that show the full range of performance improvements over the current state of the art that are achieved. Summary measurements of performance improvements, depending on the language pair and dataset, range from 6.73 absolute MaxF1 percentage points to 16.75 absolute MaxF1 percentage points and from 5.58 absolute 11-point interpolated average precision percentage points to 17.19 absolute 11-point interpolated average precision percentage points. 
Section 4 discusses the results and possible extensions of the method. Section 5 wraps up with the main conclusions. 2 Algorithm While our focus in this paper is on using global constraints to improve cognates detection, we believe that our method is useful more generally. We therefore abstract out some of the specifics of cognates detection and present our algorithm more generally in this section, with the hope that it will be able to be used in the future for other applications in addition to cognates detection. None of our abstraction harms understanding of our method’s applicability to cognates detection and the fact that the method may be more widely beneficial does not in any way detract from the utility we show it has for improving cognates detection. A common setting is where one has a set X = {x1, x2, ..., xn} and a set Y = {y1, y2, ..., yn} where the task is to extract (x, y) pairs such that (x, y) are in some relation R. Here are examples: • X might be a set of states and Y might be a set of cities and the relation R might be “is the capital of”; • X might be a set of images and Y might be a set of people’s names and the relation R might be “is a picture of”; • X might be a set of English words and Y might be a set of French words and the relation R might be “is cognate with”. A common way these problems are approached is that a model is trained that can score each pair (x, y) and those pairs with scores above a threshold are extracted. We propose that often the relation will have a tendency, or a hard constraint, to satisfy particular properties and that this ought to be utilized to improve the quality of the extracted pairs. The approach we put forward is to re-score each (x, y) pair by utilizing scores generated for other pairs and our knowledge of properties of the relation being extracted. In this paper, we present and evaluate methods for improving the scores of each (x, y) pair for the case when the relation is known to be one-to-one and discuss extensions to other situations. The current approach is to generate a matrix of scores for each candidate pair as follows: ScoreX,Y =   sx1,y1 · · · sx1,yn ... ... ... sxn,y1 · · · sxn,yn  . (1) Then those pairs with scores above a threshold are predicted as being in the relation. We now describe methods for sharpening the scores in the matrix by utilizing the fact that there is an overarching global structure on the predictions. 2.1 Reverse Rank We know that if (xi, yj) ∈R, then (xk, yj) /∈ R for k ̸= i when R is 1-to-1. We define reverse rank(xi, yj) = |{xk ∈X|sxk,yj ≥ sxi,yj}|. Intuitively, a high reverse rank means that there are lots of other elements of X that score better to yj than xi does; this could be evidence that (xi, yj) is not in R and ought to have a lower score. Alternatively, if there are very few or no other elements of X that score better to yj than xi does this 1985 could be evidence that (xi, yj) is in R and ought to have a higher score. In accord with this intuition, we use reverse rank as the basis for rescaling our scores as follows: scoreRR(xi, yj) = sxi,yj reverse rank(xi, yj). (2) 2.2 Forward Rank Analogous to reverse rank, another basis we can use for adjusting scores is the forward rank. We define forward rank(xi, yj) = |{yk ∈ Y |sxi,yk ≥sxi,yj}|. We then scale the scores analogously to how we did with reverse ranks via an inverse linear function.2 2.3 Combining Reverse Rank and Forward Rank For combining reverse rank and forward rank, we present results of experiments doing it two ways. 
The first is a 1-step approach:

score_{RR\_FR\_1step}(x_i, y_j) = \frac{s_{x_i,y_j}}{product},   (3)

where

product = reverse\_rank(x_i, y_j) \times forward\_rank(x_i, y_j).   (4)

The second combination method involves first computing the reverse rank and re-adjusting every score based on the reverse ranks. Then in a second step the new scores are used to compute forward ranks and then those scores are adjusted based on the forward ranks. We refer to this method as RR_FR_2step.

2.4 Maximum Assignment

If one makes the assumption that all elements in X and Y are present and have their partner element in the other set present with no extra elements and the sets are not too large, then it is interesting to compute what the 'maximal assignment' would be using the Hungarian Algorithm to optimize:

\max_{Z \subseteq X \times Y} \sum_{(x,y) \in Z} score(x, y)
s.t.  (x_i, y_j) \in Z \Rightarrow (x_k, y_j) \notin Z, \forall k \neq i
      (x_i, y_j) \in Z \Rightarrow (x_i, y_k) \notin Z, \forall k \neq j.   (5)

We do this on datasets where the assumptions hold and see how close our methods get to the Hungarian maximal assignment at similar points of the precision-recall curves. For our larger datasets where the assumptions don't hold, the Hungarian either can't complete due to limited computational resources or it functioned poorly in comparison with the performance of our reverse rank and forward rank combination methods.

²For both reverse rank and forward rank we also experimented with exponential decay and step functions, but found that simple division by the ranks worked as well or better than any of those more complicated methods.

3 Experiments

Our goal is to test whether using the global structure algorithms we described in section 2 can significantly boost performance for cognates detection. To test this hypothesis, our first step is to implement a system that uses state of the art research results to generate the initial score matrices as a current state of the art system would currently do for this task. To that end, we implemented a baseline state of the art system that uses the information sources that previous research has found to be helpful for this task, such as phonetic information, word context information, temporal context information, word frequency information, and word burstiness information (Kondrak, 2001; Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002; Klementiev and Roth, 2006; Irvine and Callison-Burch, 2013). Consistent with past work (Irvine and Callison-Burch, 2013), we use supervised training to learn the weights for combining the various information sources. The system combines the sources of information by using weights learned by an SVM (Support Vector Machine) on a small seed training set of cognates³ to optimize performance. This baseline system obtains state of the art performance on cognates detection. Using this state of the art system as our baseline, we investigated how much we could improve performance beyond current state of the art levels by applying the rescoring algorithm we described in section 2. We performed experiments on three language pairs: French-English, German-English, and Spanish-English, with different text corpora used as training and test data. The different language pairs and datasets have different levels of performance in terms of their baseline current state of the art score matrices.

³The small seed set was randomly selected and less than 20% in all cases. It was not used for testing. Note that using this data to optimize performance of the baseline system makes our baseline even stronger and makes it even harder for our new rescoring method to achieve larger improvements.
In the next few subsections, we describe our experimental details.

3.1 Lemmatization

We used morphological analyzers to convert the words in text corpora to lemma form. For English, we used the NLTK WordNetLemmatizer (Bird et al., 2009). For French, German, and Spanish we used the TreeTagger (Schmid, 1994).

3.2 Word Context Information

We used the Google N-Gram corpus (Michel et al., 2010). For English we used the English 2012 Google 5-gram corpus, for French we used the French 2012 Google 5-gram corpus, for German we used the German 2012 Google 5-gram corpus, and for Spanish we used the Spanish 2012 Google 5-gram corpus. From these corpora we compute word context similarity scores across languages using Rapp's method (Rapp, 1995, 1999). The intuition behind this method is that cognates are more likely to occur in correlating context windows, and this statistic inferred from large amounts of data captures this correlation.

3.3 Frequency Information

The intuition is that over large amounts of data cognates should have similar relative frequencies. We compute our relative frequencies by using the same corpora mentioned in the previous subsection.

3.4 Temporal Information

The intuition is that cognates will have similar temporal distributions (Klementiev and Roth, 2006). To compute the temporal similarity we use newspaper data and convert it to simple daily word counts. For each word in the corpora the word counts create a time series vector. The Fourier transform is computed on the time series vectors. Spearman rank correlation is computed on the transform vectors. For English we used the English Gigaword Fifth Edition⁴. For French we used the French Gigaword Third Edition⁵. For Spanish we used the Spanish Gigaword First Edition⁶. The German news corpora were obtained by web crawling http://www.tagesspiegel.de/ and extracting the news articles.

⁴Linguistic Data Consortium Catalog No. LDC2011T07
⁵Linguistic Data Consortium Catalog No. LDC2011T10
⁶Linguistic Data Consortium Catalog No. LDC2006T12

3.5 Word Burstiness

The intuition is that cognates will have similar burstiness measures (Church and Gale, 1995). For word burstiness we used the same corpora as for the temporal information.

3.6 Phonetic Information

The intuition is that cognates will have correspondences in how they are pronounced. For this, we compute a measurement based on Normalized Edit Distance (NED).

3.7 Combining Information Sources

We combine the information sources by using a linear Support Vector Machine to learn weights for each of the information sources from a small seed training set of cognates. So our final score assigned to a candidate cognate pair (x, y) is:

score(x, y) = \sum_{m \in metrics} w_m \, score_m(x, y),   (6)

where metrics is the set of measurements, such as phonetic similarity, word burstiness similarity, relative frequency similarity, etc., that were explained in subsections 3.2 through 3.6; w_m is the learned weight for metric m; and score_m(x, y) is the score assigned to the pair (x, y) by metric m. The scores thus assigned represent a state of the art approach for filling in the matrix identified in equation 1. At this point the matrix of scores would be used to predict cognates. We now turn to evaluation of the use of the global constraint rescoring methods from section 2 for improving performance beyond the state of the art levels.
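To make the preceding pipeline concrete, the following sketch shows how a score matrix assembled via Eq. (6) can be rescored with the reverse rank and forward rank methods of Sections 2.1–2.3. This is a minimal illustration under our own assumptions (NumPy arrays with rows indexed by X and columns by Y; a dictionary interface for the per-metric scores and the SVM-learned weights), not the implementation used in the experiments.

import numpy as np

def combine_metrics(metric_scores, weights):
    # Weighted linear combination of per-metric score matrices (Eq. 6).
    # metric_scores: dict of metric name -> (n x n) array of score_m(x_i, y_j)
    # weights: dict of metric name -> learned weight w_m (e.g., from a linear SVM)
    return sum(weights[m] * metric_scores[m] for m in metric_scores)

def reverse_rank(S):
    # reverse_rank(x_i, y_j) = |{x_k in X : s_{x_k,y_j} >= s_{x_i,y_j}}|, computed per column.
    rr = np.empty_like(S, dtype=float)
    for j in range(S.shape[1]):
        col = S[:, j]
        rr[:, j] = (col[None, :] >= col[:, None]).sum(axis=1)
    return rr

def forward_rank(S):
    # forward_rank(x_i, y_j) = |{y_k in Y : s_{x_i,y_k} >= s_{x_i,y_j}}|, i.e., ranks within rows.
    return reverse_rank(S.T).T

def rescore_rr(S):
    # Sec. 2.1, Eq. (2): divide each score by its reverse rank.
    return S / reverse_rank(S)

def rescore_rr_fr_1step(S):
    # Sec. 2.3, Eqs. (3)-(4): divide by the product of reverse rank and forward rank.
    return S / (reverse_rank(S) * forward_rank(S))

def rescore_rr_fr_2step(S):
    # Sec. 2.3, RR_FR_2step: adjust by reverse ranks first, then by forward ranks
    # recomputed on the already-adjusted scores.
    adjusted = S / reverse_rank(S)
    return adjusted / forward_rank(adjusted)

For example, rescore_rr_fr_2step(combine_metrics(metric_scores, weights)) yields the matrix whose thresholded entries correspond to the RR_FR_2step predictions evaluated below.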
3.8 Using Global Constraints to Rescore

For our cognates data we used the French-English pairs from (Bergsma and Kondrak, 2007) and the German-English and Spanish-English pairs from (Beinborn et al., 2013).

Figure 1 shows the precision-recall⁷ curves for French-English, Figure 2 shows the performance for German-English, and Figure 3 shows the performance for Spanish-English.

⁷Precision and recall are the standard measures used for systems that perform search. Precision is the percentage of predicted cognates that are indeed cognate. Recall is the percentage of cognates that are predicted as cognate. We vary the threshold that determines cognateness to generate all points along the precision-recall curve. We start with a very high threshold enabling precision of 100% and lower the threshold until recall of 100% is reached. In particular, we sort the test examples by score in descending order and then go down the list of scores in order to complete the entire precision-recall curve.

Figure 1: Precision-Recall Curves for French-English. Baseline denotes state of the art performance. (Curves shown: Baseline, RR, RR_FR_1step, RR_FR_2step, Max Assignment, Max Assignment Score; axes: Recall vs. Precision.)
Figure 2: Precision-Recall Curves for German-English. Baseline denotes state of the art performance.
Figure 3: Precision-Recall Curves for Spanish-English. Baseline denotes state of the art performance.

Note that state of the art performance (denoted in the figures as Baseline) has very different performance across the three datasets, but in all cases the systems from section 2 that incorporate global constraints and perform rescoring greatly exceed current state of the art performance levels. The Max Assignment is really just the single point that the Hungarian finds. We drew lines connecting it, but keep in mind those lines are just connecting the single point to the endpoints. Max Assignment Score traces the precision-recall curve back from the Max Assignment by steadily increasing the threshold so that only points in the maximum assignment set with scores above the increasing threshold are predicted as cognate.

For the non-max-assignment curves, it is sometimes helpful to compute a single metric summarizing important aspects of the full curve. For this purpose, maxF1 and 11-point interpolated average precision are often used. MaxF1 is the F1 measure (i.e., harmonic mean of precision and recall) at the point on the precision-recall curve where F1 is highest. The interpolated precision p_{interp} at a given recall level r is defined as the highest precision level found for any recall level r' ≥ r:

p_{interp}(r) = \max_{r' \geq r} p(r').   (7)

The 11-point interpolated average precision (11-point IAP) is then the average of the p_{interp} at r = 0.0, 0.1, ..., 1.0. Table 1 shows these performance measures for French-English, Table 2 shows the results for German-English, and Table 3 shows the results for Spanish-English. In all cases, using global structure greatly improves upon the state of the art baseline performance.
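For reference, both summary measures can be computed directly from the scored candidate list, sweeping the threshold by sorting as described in footnote 7. The sketch below is our own minimal implementation (it assumes parallel arrays of scores and binary cognate labels covering all candidate pairs), not the evaluation code used for the reported numbers.

import numpy as np

def precision_recall_curve(scores, labels):
    # Sort candidates by score in descending order and compute precision/recall
    # after each additional pair is predicted cognate.
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels, dtype=float)[order]
    true_pos = np.cumsum(labels)
    num_predicted = np.arange(1, len(labels) + 1)
    precision = true_pos / num_predicted
    recall = true_pos / labels.sum()
    return precision, recall

def max_f1(precision, recall):
    # Highest harmonic mean of precision and recall along the curve.
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return float(f1.max())

def eleven_point_iap(precision, recall):
    # Average of interpolated precision (Eq. 7) at recall levels 0.0, 0.1, ..., 1.0.
    interpolated = []
    for r in np.linspace(0.0, 1.0, 11):
        mask = recall >= r
        interpolated.append(precision[mask].max() if mask.any() else 0.0)
    return float(np.mean(interpolated))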
In (Bergsma and Kondrak, 2007), for French-English data a result of 66.5 11-point IAP is reported for a situation where word alignments from a bitext are available and a result of 77.7 11-point IAP is reported for a situation where translation pairs are available in large quantities. The setting considered in the current paper is much more challenging since it does not use bilingual dictionaries or word alignments from bitexts. The setting in the current paper is the one mentioned as future work on page 663 of (Bergsma and Kondrak, 2007): “In particular, we plan to investigate approaches that do not require the bilingual dictionaries or bitexts to generate training data.”

METHOD        MAX F1   11-POINT IAP
BASELINE      54.92    50.99
RR            62.94    59.62
RR FR 1STEP   68.35    64.42
RR FR 2STEP   69.72    67.29
Table 1: French-English Performance. BASELINE indicates current state of the art performance.

METHOD        MAX F1   11-POINT IAP
BASELINE      21.38    16.25
RR            22.71    17.80
RR FR 1STEP   28.68    22.37
RR FR 2STEP   28.11    21.83
Table 2: German-English Performance. BASELINE indicates current state of the art performance.

METHOD        MAX F1   11-POINT IAP
BASELINE      56.26    57.03
RR            68.52    69.33
RR FR 1STEP   70.66    71.47
RR FR 2STEP   73.01    74.22
Table 3: Spanish-English Performance. BASELINE indicates current state of the art performance.

Note that the evaluation thus far is a bit artificial for real cognates detection because in a real setting you wouldn't only be selecting matches for relatively small subsets of words that are guaranteed to have a cognate on the other side. Such was the case for our evaluation, where the French-English set had approx. 600 cognate pairs, the German-English set had approx. 1000 pairs, and the Spanish-English set had approx. 3000 pairs. In a real setting, the system would have to consider words that don't have a cognate match in the other language and not only words that were hand-selected and guaranteed to have cognates. We are not aware of others evaluating according to this much more difficult condition, but we think it is important to consider, especially given the potential impacts it could have on the global structure methods we've put forward. Therefore, we run a second set of evaluations where we take the ten thousand most common words in our corpora for each of our languages, which contain many of the cognates from the standard test sets, and we add in any remaining words from the standard test sets that didn't make it into the top ten thousand. We then repeat each of the experiments under this much more challenging condition. With approx. ten thousand squared candidates, i.e., approx. 100 million candidates, to consider for cognateness, this is a large data condition.

Figure 4: Precision-Recall Curves for French-English (large data). Note that Baseline denotes state of the art performance. (Curves shown: Baseline, RR, RR_FR_1step, RR_FR_2step.)

The Hungarian didn't run to completion on two of the datasets due to limited computational resources. On French-English it completed, but achieved poorer performance than any of the other methods. This makes sense, as it is designed for the case when there really is a bipartite matching to be found, as in the artificial yet standard cognates evaluation that was just presented. When confronted with large amounts of words that create a much denser space and have no match at all on the other side, the all-or-nothing assignments of the Hungarian are not ideal.
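When the one-to-one assumptions of Section 2.4 do hold and the candidate space is small, the maximum assignment can be computed with an off-the-shelf solver. The sketch below uses SciPy's linear_sum_assignment, which solves the same optimization as the Hungarian Algorithm in Eq. (5); it is our own illustration of this comparison point, not the code used in the experiments.

import numpy as np
from scipy.optimize import linear_sum_assignment

def maximum_assignment(S):
    # Return the (row, column) pairs maximizing the total score subject to the
    # one-to-one constraints of Eq. (5).
    rows, cols = linear_sum_assignment(S, maximize=True)
    return list(zip(rows.tolist(), cols.tolist()))

def max_assignment_score_curve(S, assignment):
    # Scores of the assigned pairs in descending order; raising a threshold over
    # this list traces the 'Max Assignment Score' curve shown in the figures.
    return sorted((float(S[i, j]) for i, j in assignment), reverse=True)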
The reverse rank and forward rank rescoring methods are still quite effective in improving performance, although not by as much as they did in the small data results from above. Figure 4 shows the full precision-recall curves for French-English for the large data condition, Figure 5 shows the curves for German-English for the large data condition, and Figure 6 shows the results for Spanish-English for the large data condition. Tables 4 through 6 show the summary metrics for the three language pairs for the large data experiments. We can see that the reverse rank and forward rank methods of taking into account the global structure of interactions among predictions are still helpful, providing large improvements in performance even in this challenging large data condition over strong state of the art baselines that make cognate predictions independently of each other and don't do any rescoring based on global constraints.

Figure 5: Precision-Recall Curves for German-English (large data). Note that Baseline denotes state of the art performance.
Figure 6: Precision-Recall Curves for Spanish-English (large data). Note that Baseline denotes state of the art performance.

METHOD        MAX F1   11-POINT IAP
BASELINE      55.08    51.35
RR            60.88    58.79
RR FR 1STEP   65.87    63.55
RR FR 2STEP   65.76    65.26
Table 4: French-English Performance (large data). BASELINE indicates state of the art performance.

METHOD        MAX F1   11-POINT IAP
BASELINE      21.25    16.17
RR            24.78    19.13
RR FR 1STEP   30.72    24.97
RR FR 2STEP   30.34    24.86
Table 5: German-English Performance (large data). BASELINE indicates state of the art performance.

METHOD        MAX F1   11-POINT IAP
BASELINE      54.75    54.55
RR            62.52    61.42
RR FR 1STEP   66.45    65.89
RR FR 2STEP   66.38    65.5
Table 6: Spanish-English Performance (large data). BASELINE indicates state of the art performance.

4 Discussion

We believe that this work opens up new avenues for further exploration. A few of these include the following:

• investigating the utility of applying and extending the method to other applications such as Information Extraction applications, many of which have similar global constraints as cognates detection;
• investigating how to handle other forms of global structure, including tendencies that are not necessarily hard constraints;
• developing more theory to more precisely understand some of the nuances of using global structure when it's applicable and making connections with other areas of machine learning such as semi-supervised learning, active learning, etc.; and
• investigating how to have a machine learn that global structure exists and learn what form of global structure exists.

5 Conclusions

Cognates detection is an interesting and challenging task. Previous work has yielded state of the art approaches that create a matrix of scores for all word pairs based on optimized weighted combinations of component scores computed on the basis of various helpful sources of information such as phonetic information, word context information, temporal context information, word frequency information, and word burstiness information. However, when assigning a score to a word pair, the current state of the art methods do not take into account scores assigned to other word pairs.
We proposed a method for rescoring the matrix that cur1990 rent state of the art methods produce by taking into account the scores assigned to other word pairs. The methods presented in this paper are complementary to existing state of the art methods, easy to implement, computationally efficient, and practically effective in improving performance by large amounts. Experimental results reveal that the new methods significantly improve state of the art performance in multiple cognates detection experiments conducted on standard freely and publicly available datasets with different language pairs and various conditions such as different levels of baseline performance and different data size conditions, including with more realistic large data size conditions than have been evaluated with in the past. References Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2013. Cognate production using character-based machine translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing, Nagoya, Japan, pages 883–891. http: //www.aclweb.org/anthology/I13-1112. Shane Bergsma and Grzegorz Kondrak. 2007. Alignment-based discriminative string similarity. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 656–663. http: //www.aclweb.org/anthology/P07-1083. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media, Inc., 1st edition. Alexandre Bouchard-Cˆot´e, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated Reconstruction of Ancient Languages using Probabilistic Models of Sound Change. Proceedings of the National Academy of Sciences 110:4224–4229. https://doi.org/10.1073/pnas.1204678110. Kenneth W. Church and William A. Gale. 1995. Poisson mixtures. Natural Language Engineering 1:163–190. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1–8. https://doi.org/10.3115/1118693.1118694. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics 31(1):25–70. https://doi.org/10.1162/0891201053630273. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume. Barcelona, Spain, pages 111–118. https://doi.org/10.3115/1218955.1218970. David Hall and Dan Klein. 2011. Large-scale cognate recovery. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Edinburgh, Scotland, UK., pages 344–354. http: //www.aclweb.org/anthology/D11-1032. Ann Irvine and Chris Callison-Burch. 2013. Supervised bilingual lexicon induction with multiple monolingual signals. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 518–523. http://www.aclweb.org/ anthology/N13-1056. Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Sydney, Australia, pages 817–824. https://doi.org/10.3115/1220175.1220278. Grzegorz Kondrak. 2001. Identifying cognates by phonetic and semantic similarity. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL ’01, pages 1–8. http://www.aclweb.org/ anthology/N/N01/N01-1014.pdf. Grzegorz Kondrak, Daniel Marcu, and Kevin Knight. 2003. Cognates can improve statistical translation models. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: Companion Volume of the Proceedings of HLT-NAACL 2003–short Papers - Volume 2. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL-Short ’03, pages 46–48. http://www.aclweb.org/ anthology/N/N03/N03-2016.pdf. Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL ’01, pages 1991 1–8. http://www.aclweb.org/anthology/ N/N01/N01-1020.pdf. Benjamin S. Mericli and Michael Bloodgood. 2012. Annotating cognates and etymological origin in Turkic languages. In Proceedings of the First Workshop on Language Resources and Technologies for Turkic Languages at the Eighth International Conference on Languange Resources and Evaluation (LREC’12). European Language Resources Association, Istanbul, Turkey, pages 47–51. http:// arxiv.org/abs/1501.03191. Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2010. Quantitative analysis of culture using millions of digitized books. Science 331(6014):176–182. https://doi.org/10.1126/science.1199644. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Cambridge, Massachusetts, USA, pages 320–322. https://doi.org/10.3115/981658.981709. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, College Park, Maryland, USA, pages 519–526. https://doi.org/10.3115/1034678.1034756. Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In Proceedings of the 6th Conference on Natural language Learning - Volume 20. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 1–7. http://www.aclweb.org/anthology/ W/W02/W02-2026.pdf. Helmut Schmid. 1994. Part-of-speech tagging with neural networks. In COLING. pages 172–176. http://www.aclweb.org/anthology/C/ C94/C94-1027.pdf. Michel Simard, George F. Foster, and Pierre Isabelle. 1993. Using cognates to align sentences in bilingual corpora. 
In Proceedings of the 1993 Conference of the Centre for Advanced Studies on Collaborative Research: Distributed Computing - Volume 2. IBM Press, CASCON ’93, pages 1071–1082. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005a. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd International Conference on Machine learning. ACM, pages 896–903. Ben Taskar, Lacoste-Julien Simon, and Klein Dan. 2005b. A discriminative matching approach to word alignment. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Vancouver, British Columbia, Canada, pages 73–80. http://www.aclweb.org/ anthology/H/H05/H05-1010.pdf. 1992
2017
181
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1993–2003 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1182 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1993–2003 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1182 One-Shot Neural Cross-Lingual Transfer for Paradigm Completion Katharina Kann CIS LMU Munich, Germany [email protected] Ryan Cotterell Department of Computer Science Johns Hopkins University, USA [email protected] Hinrich Sch¨utze CIS LMU Munich, Germany [email protected] Abstract We present a novel cross-lingual transfer method for paradigm completion, the task of mapping a lemma to its inflected forms, using a neural encoder-decoder model, the state of the art for the monolingual task. We use labeled data from a high-resource language to increase performance on a lowresource language. In experiments on 21 language pairs from four different language families, we obtain up to 58% higher accuracy than without transfer and show that even zero-shot and one-shot learning are possible. We further find that the degree of language relatedness strongly influences the ability to transfer morphological knowledge. 1 Introduction Low-resource natural language processing (NLP) remains an open problem for many tasks of interest. Furthermore, for most languages in the world, highcost linguistic annotation and resource creation are unlikely to be undertaken in the near future. In the case of morphology, out of the 7000 currently spoken (Lewis, 2009) languages, only about 200 have computer-readable annotations (Sylak-Glassman et al., 2015) – although morphology is easy to annotate compared to syntax and semantics. Transfer learning is one solution to this problem: it exploits annotations in a high-resource language to train a system for a low-resource language. In this work, we present a method for cross-lingual transfer of inflectional morphology using an encoder-decoder recurrent neural network (RNN). This allows for the development of tools for computational morphology with limited annotated data. In many languages, individual lexical entries may be realized as distinct inflections of a single Present Past Indicative Indicative Sg Pl Sg Pl 1 sue˜no so˜namos so˜n´e so˜namos 2 sue˜nas so˜n´ais so˜naste so˜nasteis 3 sue˜na sue˜nan so˜n´o so˜naron Table 1: Partial inflection table for the Spanish verb so˜nar. lemma depending on the syntactic context. For example, the 3SgPresInd of the English verbal lemma to bring is brings. In morphologically rich languages, a lemma can have hundreds of individual forms. Thus, both generation and analysis of such morphological inflections are active areas of research in NLP and morphological processing has been shown to be a boon to several other down-stream applications, e.g., machine translation (Dyer et al., 2008), speech recognition (Creutz et al., 2007), parsing (Seeker and C¸ etino˘glu, 2015), keyword spotting (Narasimhan et al., 2014) and word embeddings (Cotterell et al., 2016b), inter alia. In this work, we focus on paradigm completion, a form of morphological generation that maps a given lemma to a target inflection, e.g., (bring, Past) 7→brought (with Past being the target tag). 
RNN sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) are the state of the art for paradigm completion (Faruqui et al., 2016; Kann and Sch¨utze, 2016a; Cotterell et al., 2016a). However, these models require a large amount of data to achieve competitive performance; this makes them unsuitable for out-of-thebox application to paradigm completion in the low-resource scenario. To mitigate this, we consider transfer learning: we train an end-to-end neural system jointly with limited data from a lowresource language and a larger amount of data from a high-resource language. This technique allows 1993 the model to apply knowledge distilled from the high-resource training data to the low-resource language as needed. We conduct experiments on 21 language pairs from four language families, emulating a lowresource setting. Our results demonstrate successful transfer of morphological knowledge. We show improvements in accuracy and edit distance of up to 58% (accuracy) and 4.62 (edit distance) over the same model with only in-domain language data on the paradigm completion task. We further obtain up to 44% (resp. 14%) improvement in accuracy for the one-shot (resp. zero-shot) setting, i.e., one (resp. zero) in-domain language sample per target tag. We also show that the effectiveness of morphological transfer depends on language relatedness, measured by lexical similarity. 2 Inflectional Morphology and Paradigm Completion Many languages exhibit inflectional morphology, i.e., the form of an individual lexical entry mutates to show properties such as person, number or case. The citation form of a lexical entry is referred to as the lemma and the collection of its possible inflections as its paradigm. Tab. 1 shows an example of a partial paradigm; we display several forms for the Spanish verbal lemma so˜nar. We may index the entries of a paradigm by a morphological tag, e.g., the 2SgPresInd form sue˜nas in Tab. 1. In generation, the speaker must select an entry of the paradigm given the form’s context. In general, the presence of rich inflectional morphology is problematic for NLP systems as it greatly increases the token-type ratio and, thus, word form sparsity. An important task in inflectional morphology is paradigm completion (Durrett and DeNero, 2013; Ahlberg et al., 2014; Nicolai et al., 2015; Cotterell et al., 2015; Faruqui et al., 2016). Its goal is to map a lemma to all individual inflections, e.g., (so˜nar, 1SgPresInd) 7→sue˜no. There are good solutions for paradigm completion when a large amount of annotated training data is available (Cotterell et al., 2016a).1 In this work, we address the lowresource setting, a yet unsolved challenge. 1The SIGMORPHON 2016 shared task (Cotterell et al., 2016a) on morphological reinflection, a harder generalization of paradigm completion, found that ≥98% accuracy can be achieved in many languages with neural sequence-to-sequence models, improving the state of the art by 10%. 2.1 Transferring Inflectional Morphology In comparison to other NLP annotations, e.g., partof-speech (POS) and named entities, morphological inflection is especially challenging for transfer learning: we can define a universal set of POS tags (Petrov et al., 2012) or of entity types (e.g., coarsegrained types like person and location or finegrained types (Yaghoobzadeh and Sch¨utze, 2015)), but inflection is much more language-specific. It is infeasible to transfer morphological knowledge from Chinese to Portuguese as Chinese does not use inflected word forms. 
Transferring named entity recognition, however, among Chinese and European languages works well (Wang and Manning, 2014a). But even transferring inflectional paradigms from morphologically rich Arabic to Portuguese seems difficult as the inflections often mark dissimilar subcategories. In contrast, transferring morphological knowledge from Spanish to Portuguese, two languages with similar conjugations and 89% lexical similarity, appears promising. Thus, we conjecture that transfer of inflectional morphology is only viable among related languages.

2.2 Formalization of the Task

We now offer a formal treatment of the cross-lingual paradigm completion task and develop our notation. Let Σ_ℓ be a discrete alphabet for language ℓ and let T_ℓ be a set of morphological tags for ℓ. Given a lemma w_ℓ in ℓ, the morphological paradigm (inflectional table) π can be formalized as a set of pairs

π(w_ℓ) = \{ (f_k[w_ℓ], t_k) \}_{k \in T(w_ℓ)}   (1)

where f_k[w_ℓ] ∈ Σ_ℓ⁺ is an inflected form, t_k ∈ T_ℓ is its morphological tag and T(w_ℓ) is the set of slots in the paradigm; e.g., a Spanish paradigm is:

π(soñar) = \{ (sueño, 1SgPresInd), \ldots, (soñaran, 3PlPastSbj) \}

Paradigm completion consists of predicting missing slots in the paradigm π(w_ℓ) of a given lemma w_ℓ. In cross-lingual paradigm completion, we consider a high-resource source language ℓ_s (lots of training data available) and a low-resource target language ℓ_t (little training data available). We denote the source training examples as D_s (with |D_s| = n_s) and the target training examples as D_t (with |D_t| = n_t). The goal of cross-lingual paradigm completion is to populate paradigms in the low-resource target language with the help of data from the high-resource source language, using only few in-domain examples.

3 Cross-Lingual Transfer as Multi-Task Learning

We describe our probability model for morphological transfer using terminology from multi-task learning (Caruana, 1997; Collobert et al., 2011). We consider two tasks, training a paradigm completor (i) for a high-resource language and (ii) for a low-resource language. We want to train jointly, so we reap the benefits of having related languages. Thus, we define the log-likelihood as

L(θ) = \sum_{(k, w_{ℓ_t}) \in D_t} \log p_θ(f_k[w_{ℓ_t}] \mid w_{ℓ_t}, t_k, λ_{ℓ_t}) + \sum_{(k, w_{ℓ_s}) \in D_s} \log p_θ(f_k[w_{ℓ_s}] \mid w_{ℓ_s}, t_k, λ_{ℓ_s})   (2)

where we tie parameters θ for the two languages together to allow the transfer of morphological knowledge between languages. The λs are special language tags, cf. Sec. 3.2. Each probability distribution p_θ defines a distribution over all possible realizations of an inflected form, i.e., a distribution over Σ*. For example, consider the related Romance languages Spanish and French; focusing on one term from each of the summands in Eq. (2) (the past participle of the translation of to visit in each language), we arrive at

L_{visit}(θ) = \log p_θ(visitado \mid visitar, PastPart, ES) + \log p_θ(visité \mid visiter, PastPart, FR)   (3)

Our cross-lingual setting forces both transductions to share part of the parameter vector θ, to represent morphological regularities between the two languages in a common embedding space and, thus, to enable morphological transfer. This is no different from monolingual multi-task settings, e.g., jointly training a parser and tagger for transfer of syntax. Based on recent advances in neural transducers, we parameterize each distribution as an encoder-decoder RNN, as in (Kann and Schütze, 2016b). In their setup, the RNN encodes the input and predicts the forms in a single language.
In contrast, we force the network to predict two or more languages.

3.1 Encoder-Decoder RNN

We parameterize the distribution p_θ as an encoder-decoder gated RNN (GRU) with attention (Bahdanau et al., 2015), the state-of-the-art solution for the monolingual case (Kann and Schütze, 2016b). A bidirectional gated RNN encodes the input sequence (Cho et al., 2014) – the concatenation of (i) the language tag, (ii) the morphological tag of the form to be generated and (iii) the characters of the input word – represented by embeddings. The input to the decoder consists of concatenations of →h_i and ←h_i, the forward and backward hidden states of the encoder. The decoder, a unidirectional RNN, uses attention: it computes a weight α_i for each h_i. Each weight reflects the importance given to that input position. Using the attention weights, the probability of the output sequence given the input sequence is:

p(y \mid x_1, \ldots, x_{|X|}) = \prod_{t=1}^{|Y|} g(y_{t-1}, s_t, c_t)   (4)

where y = (y_1, \ldots, y_{|Y|}) is the output sequence (a sequence of |Y| characters), x = (x_1, \ldots, x_{|X|}) is the input sequence (a sequence of |X| characters), g is a non-linear function, s_t is the hidden state of the decoder and c_t is the sum of the encoder states h_i, weighted by attention weights α_i(s_{t-1}) which depend on the decoder state:

c_t = \sum_{i=1}^{|X|} α_i(s_{t-1}) h_i   (5)

Fig. 1 shows the encoder-decoder. See Bahdanau et al. (2015) for further details.

Figure 1: Encoder-decoder RNN for paradigm completion. The lemma soñar is mapped to a target form (e.g., sueña). For brevity, language and target tags are omitted from the input. Thickness of red arrows symbolizes the degree to which the model attends to the corresponding hidden state of the encoder.

3.2 Input Format

Each source form is represented as a sequence of characters; each character is represented as an embedding. In the same way, each source tag is represented as a sequence of subtags, and each subtag is represented as an embedding. More formally, we define the alphabet Σ = ∪_{ℓ∈L} Σ_ℓ as the set of characters in the languages in L, with L being the set of languages in the given experiment. Next, we define S as the set of subtags that occur as part of the set of morphological tags T = ∪_{ℓ∈L} T_ℓ; e.g., if 1SgPresInd ∈ T, then 1, Sg, Pres, Ind ∈ S. Note that the set of subtags S is defined as attributes from the UNIMORPH schema (Sylak-Glassman, 2016) and, thus, is universal across languages; the schema is derived from research in linguistic typology.² The format of the input to our system is S⁺Σ⁺. The output format is Σ⁺. Both input and output are padded with distinguished BOW and EOW symbols. What we have described is the representation of Kann and Schütze (2016b). In addition, we prepend a symbol λ ∈ L to the input string (e.g., λ = Es, also represented by an embedding), so the RNN can handle multiple languages simultaneously and generalize over them. Thus, our final input is of the form λS⁺Σ⁺.

4 Languages and Language Families

To verify the applicability of our method to a wide range of languages, we perform experiments on example languages from several different families. Romance languages, a subfamily of Indo-European, are widely spoken, e.g., in Europe and Latin America. Derived from the common ancestor Vulgar Latin (Harris and Vincent, 2003), they share large parts of their lexicon and inflectional morphology; we expect knowledge among them to be easily transferable.
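Before turning to the individual language families, the input representation of Sec. 3.2 can be made concrete with a short sketch. The following is our own illustration of how λS⁺Σ⁺ inputs are assembled and of how source- and target-language examples are pooled for the joint objective of Eq. (2); the data structures, the names of the padding symbols, and the example triples are assumptions for illustration, not the authors' code.

BOW, EOW = "<w>", "</w>"  # distinguished padding symbols (names are our choice)

def encode_input(lang_tag, subtags, lemma):
    # Input of the form lambda S+ Sigma+: language tag, subtag symbols, lemma characters.
    # Exactly where the BOW/EOW padding sits relative to the tags is our simplification.
    return [BOW, lang_tag] + list(subtags) + list(lemma) + [EOW]

def encode_output(form):
    # Output of the form Sigma+: the characters of the inflected form.
    return [BOW] + list(form) + [EOW]

def build_training_set(source_examples, target_examples, source_tag, target_tag):
    # Pool high-resource source data with low-resource target data; training a single
    # model on the pooled set corresponds to optimizing the two summands of Eq. (2)
    # with tied parameters. Each example is a (lemma, subtags, inflected form) triple.
    pooled = []
    for lang, examples in ((source_tag, source_examples), (target_tag, target_examples)):
        for lemma, subtags, form in examples:
            pooled.append((encode_input(lang, subtags, lemma), encode_output(form)))
    return pooled

# Hypothetical usage: Portuguese as source, Spanish as low-resource target.
pairs = build_training_set(
    source_examples=[("visitar", ("1", "Sg", "Pres", "Ind"), "visito")],
    target_examples=[("soñar", ("1", "Sg", "Pres", "Ind"), "sueño")],
    source_tag="PT", target_tag="ES")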
2Note that while the subtag set is universal, which subtags a language actually uses is language-specific; e.g., Spanish does not mark animacy as Russian does. We contrast this with the universal POS set (Petrov et al., 2012), where it is more likely that we see all 17 tags in most languages. PT CA IT FR similarity to ES 89% 85% 82% 75% Table 2: Lexical similarities for Romance (Lewis, 2009). We experiment on Catalan, French, Italian, Portuguese and Spanish. Tab. 2 shows that Spanish – which takes the role of the low-resource language in our experiments – is closely related with the other four, with Portuguese being most similar. We hypothesize that the transferability of morphological knowledge between source and target corresponds to the degree of lexical similarity; thus, we expect Portuguese and Catalan to be more beneficial for Spanish than Italian and French. The Indo-European Slavic language family has its origin in eastern-central Europe (Corbett and Comrie, 2003). We experiment on Bulgarian, Macedonian, Russian and Ukrainian (Cyrillic script) and on Czech, Polish and Slovene (Latin script). Macedonian and Ukranian are low-resource languages, so we assign them the low-resource role. For Romance and for Uralic, we experiment with groups containing three or four source languages. To arrive at a comparable experimental setup for Slavic, we run two experiments, each with three source and one target language: (i) from Russian, Bulgarian and Czech to Macedonian; and (ii) from Russian, Polish and Slovene to Ukrainian. We hope that the paradigm completor learns similar embeddings for, say, the characters “e” in Polish and “ϵ” in Ukrainian. Thus, the use of two scripts in Slavic allows us to explore transfer across different alphabets. We further consider a non-Indo-European language family, the Uralic languages. We experiment on the three most commonly spoken languages – Finnish, Estonian and Hungarian (Abondolo, 2015) – as well as Northern Sami, a language used in Northern Scandinavia. While Finnish and Estonian are closely related (both are members of the Finnic subfamily), Hungarian is a more distant cousin. Estonian and Northern Sami are lowresource languages, so we assign them the lowresource role, resulting in two groups of experiments: (i) Finnish, Hungarian and Estonian to Northern Sami; (ii) Finnish, Hungarian and Northern Sami to Estonian. Arabic (baseline) is a Semitic language (part of the Afro-Asiatic family (Hetzron, 2013)) that is 1996 spoken in North Africa, the Arabian Peninsula and other parts of the Middle East. It is unrelated to all other languages used in this work. Both in terms of form (new words are mainly built using a templatic system) and categories (it has tags such as construct state), Arabic is very different. Thus, we do not expect it to support morphological knowledge transfer and use it as a baseline for all target languages. 5 Experiments We run four experiments on 21 distinct pairings of languages to show the feasibility of morphological transfer and analyze our method. We first discuss details common to all experiments. We keep hyperparameters during all experiments (and for all languages) fixed to the following values. Encoder and decoder RNNs each have 100 hidden units and the size of all subtag, character and language embeddings is 300. For training we use ADADELTA (Zeiler, 2012) with minibatch size 20. All models are trained for 300 epochs. Following Le et al. 
(2015), we initialize all weights in the encoder, decoder and the embeddings except for the GRU weights in the decoder to the identity matrix. Biases are initialized to zero. Evaluation metrics: (i) 1-best accuracy: the percentage of predictions that match the true answer exactly; (ii) average edit distance between prediction and true answer. The two metrics differ in that accuracy gives no partial credit and incorrect answers may be drastically different from the annotated form without incurring additional penalty. In contrast, edit distance gives partial credit for forms that are closer to the true answer. 5.1 Exp. 1: Transfer Learning for Paradigm Completion In this experiment, we investigate to what extent our model transfers morphological knowledge from a high-resource source language to a low-resource target language. We experimentally answer three questions. (i) Is transfer learning possible for morphology? (ii) How much annotated data do we need in the low-resource target language? (iii) How closely related must the two languages be to achieve good results? Data. Based on complete inflection tables from unimorph.org (Kirov et al., 2016), we create datasets as follows. Each training set consists of 12,000 samples in the high-resource source 50·20 50·21 50·22 50·23 50·24 50·25 50·26 50·27 Number of Samples 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Languages Pt Ca It Fr Ar Es Figure 2: Learning curves showing the accuracy on Spanish test when training on language λ ∈ {PT, CA, IT, FR, AR, ES}. Except for λ=ES, each model is trained on 12,000 samples from λ and “Number of Samples” (x-axis) of Spanish. language and nt∈{50, 200} samples in the lowresource target language. We create target language dev and test sets of sizes 1600 and 10,000, respectively.3 For Romance and Arabic, we create learning curves for nt∈{100, 400, 800, 1600, 3200, 6400, 12000}. Due to the data available to us, we use only verbs for the Romance and Uralic language families, but nouns, verbs and adjectives for the Slavic language family and Arabic. Lemmata and inflections are randomly selected from all available paradigms. Results and Discussion. Tab. 3 shows the effectiveness of transfer learning. There are two baselines. (i) “0”: no transfer, i.e., we consider only in-domain data; (ii) “AR”: Arabic, which is unrelated to all target languages. With the exception of the 200 sample case of ET→SME, cross-lingual transfer is always better than the two baselines; the maximum improvement is 0.58 (0.58 vs. 0.00) in accuracy for the 50 sample case of CA→ES. More closely related source languages improve performance more than distant ones. French, the Romance language least similar to Spanish, performs worst for →ES. For the target language Macedonian, Bulgarian provides most benefit. This can again be explained by similarity: Bulgarian is closer to Macedonian than the other languages in this group. The best result for Ukrainian is RU→UK. Unlike Polish and Slowenian, Russian is the only language in this group that uses the same script as Ukrainian, showing 3For Estonian, we use 7094 (not 12,000) train and 5000 (not 10,000) test samples as more data is unavailable. 
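Both evaluation metrics are straightforward to compute; the sketch below is our own minimal implementation of 1-best accuracy and average edit distance (Levenshtein distance over characters), since the paper does not spell out the evaluation code.

def edit_distance(a, b):
    # Levenshtein distance: minimum number of insertions, deletions and substitutions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution (free if equal)
        prev = curr
    return prev[-1]

def evaluate(predictions, references):
    # 1-best accuracy and average edit distance over parallel lists of word forms.
    assert len(predictions) == len(references)
    exact_matches = sum(p == r for p, r in zip(predictions, references))
    total_distance = sum(edit_distance(p, r) for p, r in zip(predictions, references))
    return exact_matches / len(references), total_distance / len(references)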
1997 Romance Slavic I Slavic II Uralic I Uralic II source 0 AR PT CA IT FR 0 AR RU BG CS 0 AR RU PL SL 0 AR FI HU ET 0 AR FI HU SME target →ES →MK →UK →SME →ET 50 acc ↑ 0.00 0.04 0.48 0.58 0.46 0.29 0.00 0.00 0.23 0.47 0.13 0.01 0.01 0.47 0.16 0.07 0.00 0.01 0.07 0.05 0.03 0.02 0.01 0.35 0.21 0.17 ED ↓ 5.42 4.06 0.85 0.80 1.15 1.82 5.71 5.59 1.61 0.87 2.32 5.23 4.80 0.77 2.14 3.12 6.21 5.47 2.88 3.46 3.71 4.50 4.51 1.55 2.19 2.60 200 acc ↑ 0.38 0.54 0.62 0.78 0.74 0.60 0.21 0.40 0.62 0.77 0.57 0.16 0.21 0.64 0.55 0.50 0.13 0.24 0.26 0.28 0.13 0.34 0.53 0.74 0.71 0.66 ED ↓ 1.37 0.87 0.57 0.39 0.44 0.82 1.93 1.12 0.68 0.36 0.72 2.09 1.60 0.49 0.73 0.82 2.94 1.89 1.78 1.61 2.46 1.47 0.98 0.41 0.48 0.62 Table 3: Accuracy (acc; the higher the better; indicated by ↑) and edit distance (ED; the lower the better; indicated by ↓) of cross-lingual transfer learning for paradigm completion. The target language is indicated by “→”, e.g., it is Spanish for “→ES”. Sources are indicated in the row “source”; “0” is the monolingual case. Except for Estonian, we train on ns = 12,000 source samples and nt ∈{50, 200} target samples (as indicated by the row). There are two baselines in the table. (i) “0”: no transfer, i.e., we consider only in-domain data; (ii) “AR”: the Semitic language Arabic is unrelated to all target languages and functions as a dummy language that is unlikely to provide relevant information. All languages are denoted using the official codes (SME=Northern Sami). the importance of the alphabet for transfer. Still, the results also demonstrate that transfer works across alphabets (although not as well); this suggests that similar embeddings for similar characters have been learned. Finnish is the language that is closest to Estonian and it again performs best as a source language for Estonian. For Northern Sami, transfer works least well, probably because the distance between sources and target is largest in this case. The distance of the Sami languages from the Finnic (Estonian, Finnish) and Ugric (Hungarian) languages is much larger than the distances within Romance and within Slavic. However, even for Northern Sami, the worst performing language, adding an additional language is still always beneficial compared to the monolingual baseline. Learning curves for Romance and Arabic further support our finding that language similarity is important. In Fig. 2, knowledge is transferred to Spanish, and a baseline – a model trained only on Spanish data – shows the accuracy obtained without any transfer learning. Here, Catalan and Italian help the most, followed by Portuguese, French and, finally, Arabic. This corresponds to the order of lexical similarity with Spanish, except for the performance of Portuguese (cf. Tab. 2). A possible explanation is the potentially confusing overlap of lemmata between the two languages – cf. discussion in the next subsection. That the transfer learning setup improves performance for the unrelated language Arabic as source is at first surprising. However, adding new samples to a small training set helps prevent overfitting (e.g., rote memorization) even if the source is a morphologically unrelated language; effectively acting as a regularizer. Following (Kann and Sch¨utze, 2016b) we did not use standard regularizers. To verify that the effect of Arabic is mainly a regularization effect, we ran a small monolingual experiment on ES (200 setting) with dropout 0.5 (Srivastava et al., 2014). 
The resulting accuracy is 0.57, very similar to the comparable Arabic number of 0.54 in the table. The accuracy for dropout and 50 ES samples stays at 0.00, showing that in extreme low-resource settings an unrelated language might be preferable to a standard regularizer. Error Analysis for Romance. Even for only 50 Spanish instances, many inflections are correctly produced in transfer. For, e.g., (criar, 3PlFutSbj) 7→criaren, model outputs are: fr: criaren, ca: criaren, es: crntaron, it: criaren, ar: ecriren, pt: criaren (all correct except for the two baselines). Many errors involve accents, e.g., (contrastar, 2PlFutInd) 7→contrastar´eis; model outputs are: fr: contrastareis, ca: contrastareis, es: conterar´ıan, it: contrastareis, ar: contastar´ıas, pt: contrastareis. Some inflected forms are produced incorrectly by all systems, mainly because they apply the inflectional rules of the source language directly to the target. Finally, the output of the model trained on Portuguese contains a class of errors that are unlike those of other systems. Example: (contraatacar, 1SgCond) 7→contraatacar´ıa with the following solutions: fr: contratacar´ıam, ca: contraatacar´ıa, es: concarnar, it: contratac´e, ar: cuntatar´ıa and pt: contra-atacar´ıa. The Portuguese model inserts “-” because Portuguese train data contains contraatacar and “-” appears in its inflected form. Thus, it seems that shared lemmata between the highresource source language and the low-resource target language hurt our model’s performance.4 An 4To investigate this in more detail we retrain the Portuguese model with 50 Spanish samples, but exclude all lemmata that appear in Spanish train/dev/test, resulting in only 3695 1998 PT CA IT CA&PT CA&IT →ES 50 acc ↑ 0.48 0.58 0.46 0.56 0.58 ED ↓ 0.85 0.80 1.15 0.67 0.82 200 acc ↑ 0.62 0.78 0.74 0.77 0.79 ED ↓ 0.47 0.39 0.44 0.34 0.31 Table 4: Results for transfer from pairs of source languages to ES. Results from single languages are repeated for comparison. example for the generally improved performance across languages for 200 Spanish training samples is (contrastar, 2PlIndFut) 7→contrastar´eis: all models now produce the correct form. 5.2 Exp. 2: Multiple Source Languages We now want to investigate the effect of multiple source languages. Data. Our experimental setup is similar to §5.1: we use the same dev, test and low-resource train sets as before. However, we limit this experiment to the Romance language family and the highresource train data consists of samples from two different source languages at once. Choosing those which have the highest accuracies on their own, we experiment with the following pairs: CA&PT, as well as CA&IT. In order to keep all experiments easily comparable, we use half of each source language’s data, again ending up with a total of 12,000 high-resource samples. Results and Discussion. Results are shown in Tab. 4. Training on two source languages improves over training on a single one. Increases in accuracy are minor, but edit distance is reduced by up to 0.13 (50 low-resource samples) and 0.08 (200 lowresource samples). That using data from multiple languages is beneficial might be due to a weaker tendency of the final model to adapt wrong rules from the source language, since different alternatives are presented during training. 5.3 Exp. 3: Zero-Shot/One-Shot Transfer In §5.1, we investigated the relationship between indomain (target) training set size and performance. 
Here, we look at the extreme case of training set sizes 1 (one-shot) and 0 (zero-shot) for a tag. We train our model on a single sample for half of the tags appearing in the low-resource language, i.e., training samples. Accuracy on test increases by 0.09 despite the reduced size of the training set. 0 PT CA IT FR AR →ES one shot acc ↑ 0.00 0.44 0.39 0.23 0.13 0.00 ED ↓ 6.26 1.01 1.27 1.83 2.87 7.00 zero shot acc ↑ 0.00 0.14 0.08 0.01 0.02 0.00 ED ↓ 7.18 1.95 1.99 3.12 4.27 7.50 Table 5: Results for one-shot and zero-shot transfer learning. Formatting is the same as for Tab. 3. We still use ns = 12000 source samples. In the oneshot (resp. zero-shot) case, we observe exactly one form (resp. zero forms) for each tag in the target language at training time. if Tℓis the set of morphological tags for the target language, train set size is |Tℓ|/2. As before, we add 12,000 source samples. We report one-shot accuracy (resp. zero-shot accuracy), i.e., the accuracy for samples with a tag that has been seen once (resp. never) during training. Note that the model has seen the individual subtags each tag is composed of.5 Data. Now, we use the same dev, test and highresource train sets as in §5.1. However, the lowresource data is created in the way specified above. To remove a potentially confounding variable, we impose the condition that no two training samples belong to the same lemma. Results and Discussion. Tab. 5 shows that the Spanish and Arabic systems do not learn anything useful for either half of the tags. This is not surprising as there is not enough Spanish data for the system to generalize well and Arabic does not contribute exploitable information. The systems trained on French and Italian, in contrast, get a nonzero accuracy for the zero-shot case as well as 0.13 and 0.23, respectively, in the one-shot case. This shows that a single training example is sometimes sufficient for successful generation although generalization to tags never observed is rarely possible. Catalan and Portuguese show the best performance in both settings; this is intuitive since they are the languages closest to the target (cf. Tab. 2). In fact, adding Portuguese to the training data yields an absolute increase in accuracy of 0.44 (0.44 vs. 0.00) for one-shot and 0.14 (0.14 vs. 0.00) for zero-shot with corresponding improvements in edit distance. Overall, this experiment shows that with transfer learning from a closely related language the per5It is very unlikely that due to random selection a subtag will not be in train; this case did not occur in our experiments. 1999 formance of zero-shot morphological generation improves over the monolingual approach, and, in the one-shot setting, it is possible to generate the right form nearly half the time. 5.4 Exp. 4: True Transfer vs. Other Effects We would like to separate the effects of regularization that we saw for Arabic from true transfer. To this end, we generate a random cipher (i.e., a function γ : Σ ∪S 7→Σ ∪S) and apply it to all word forms and morphological tags of the high-resource train set; target language data are not changed. Ciphering makes it harder to learn true “linguistic” transfer of morphology. Consider the simplest case of transfer: an identical mapping in two languages, e.g., (visitar, 1SgPresInd) 7→visito in both Portuguese and Spanish. If we transform Portuguese using the cipher γ(iostv...) = kltqa..., then visito becomes aktkql in Portuguese and its tag becomes similarly unrecognizable as being identical to the Spanish tag 1SgPresInd. 
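The ciphering step can be illustrated with a short sketch. This is our own reading of the setup: γ is built as a random permutation of the symbol inventory Σ ∪ S and applied to the characters and subtags of the high-resource examples only; whether characters and subtags are permuted jointly or separately is not specified in the text, so the inventory passed in is left to the caller.

import random

def make_cipher(symbols, seed=0):
    # Random bijection gamma on the given symbol inventory (e.g., characters and subtags).
    symbols = list(symbols)
    shuffled = symbols[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(symbols, shuffled))

def apply_cipher(cipher, examples):
    # Apply gamma to every character and every subtag of the high-resource examples;
    # target-language data is left unchanged. Each example is (lemma, subtags, form).
    ciphered = []
    for lemma, subtags, form in examples:
        ciphered.append(("".join(cipher[c] for c in lemma),
                         tuple(cipher[t] for t in subtags),
                         "".join(cipher[c] for c in form)))
    return ciphered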
Our intuition is that ciphering will disrupt transfer of morphology.6 On the other hand, the regularization effect we observed with Arabic should still be effective. Data. We use the Portuguese-Spanish and Arabic-Spanish data from §5.1. We generate a random cipher and apply it to morphological tags and word forms for Portuguese and Arabic. The language tags are kept unchanged. Spanish is also not changed. For comparability with Tab. 3, we use the same dev and test sets as before. Results and Discussion. Tab. 6 shows that performance of PT→ES drops a lot: from 0.48 to 0.09 for 50 samples and from 0.62 to 0.54 for 200 samples. This is because there are no overt similarities between the two languages left after applying the cipher, e.g., the two previously identical forms visito are now different. The impact of ciphering on AR→ES varies: slightly improved in one case (0.54 vs. 0.56), slightly worse in three cases. We also apply the cipher to the tags and Arabic and Spanish share subtags, e.g., Sg. Just the knowledge that something is a subtag is helpful because subtags must not be generated as part of the output. We can explain the tendency of ciphering to decrease performance on AR→ES by the “masking” of common subtags. 6Note that ciphered input is much harder than transfer between two alphabets (Latin/Cyrillic) because it creates ambiguous input. In the example, Spanish “i” is totally different from Portuguese “i” (which is really “k”), but the model must use the same representation. 0→ES PT→ES AR→ES orig ciph orig ciph 50 acc ↑ 0.00 0.48 0.09 0.04 0.02 ED ↓ 5.42 0.85 3.25 4.06 4.62 200 acc ↑ 0.38 0.62 0.54 0.54 0.56 ED ↓ 1.37 0.57 0.95 0.87 0.93 Table 6: Results for ciphering. “0→ES” and “orig” are original results, copied from Tab. 3; “ciph” is the result after the cipher has been applied. For 200 samples and ciphering, there is no clear difference in performance between Portuguese and Arabic. However, for 50 samples and ciphering, Portuguese (0.09) seems to perform better than Arabic (0.02) in accuracy. Portuguese uses suffixation for inflection whereas Arabic is templatic and inflectional changes are not limited to the end of the word. This difference is not affected by ciphering. Perhaps even ciphered Portugese lets the model learn better that the beginnings of words just need to be copied. For 200 samples, the Spanish dataset may be large enough, so that ciphered Portuguese no longer helps in this regard. Comparing no transfer with transfer from a ciphered language to Spanish, we see large performance gains, at least for the 200 sample case: 0.38 (0→ES) vs. 0.54 (PT→ES) and 0.56 (AR→ES). This is evidence that our conjecture is correct that the baseline Arabic mainly acts as a regularizer that prevents the model from memorizing the training set and therefore improves performance. So performance improves even though no true transfer of morphological knowledge takes place. 6 Related Work Cross-lingual transfer learning has been used for many tasks, e.g., automatic speech recognition (Huang et al., 2013), parsing (Cohen et al., 2011; Søgaard, 2011; Naseem et al., 2012; Ammar et al., 2016), language modeling (Tsvetkov et al., 2016), entity recognition (Wang and Manning, 2014b) and machine translation (Johnson et al., 2016; Ha et al., 2016). One straightforward method is to translate datasets and then train a monolingual model (Fortuna and Shawe-Taylor, 2005; Olsson et al., 2005). 
Also, aligned corpora have been used to project information from annotations in one language to another (Yarowsky et al., 2001; Pad´o and Lapata, 2005). The drawback is that machine translation 2000 errors cause errors in the target. Therefore, alternative methods have been proposed, e.g., to port a model trained on the source language to the target language (Shi et al., 2010). In the realm of morphology, Buys and Botha (2016) recently adapted methods for the training of POS taggers to learn weakly supervised morphological taggers with the help of parallel text. Snyder and Barzilay (2008a, 2008b) developed a non-parametric Bayesian model for morphological segmentation. They performed identification of cross-lingual abstract morphemes and segmentation simultaneously and reported, similar to us, best results for related languages. Work on paradigm completion has recently been encouraged by the SIGMORPHON 2016 shared task on morphological reinflection (Cotterell et al., 2016a). Some work first applies an unsupervised alignment model to source and target string pairs and then learns a string-to-string mapping (Durrett and DeNero, 2013; Nicolai et al., 2015), using, e.g., a semi-Markov conditional random field (Sarawagi and Cohen, 2004). Encoderdecoder RNNs (Aharoni et al., 2016; Faruqui et al., 2016; Kann and Sch¨utze, 2016b), a method which our work further develops for the cross-lingual scenario, define the current state of the art. Encoder-decoder RNNs were developed in parallel by Cho et al. (2014) and Sutskever et al. (2014) for machine translation and extended by Bahdanau et al. (2015) with an attention mechanism, supporting better generalization. They have been applied to NLP tasks like speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013), parsing (Vinyals et al., 2015) and segmentation (Kann et al., 2016). More recently, a number of papers have used encoder-decoder RNNs in multitask and transfer learning settings; this is mainly work in machine translation: (Dong et al., 2015; Zoph and Knight, 2016; Chu et al., 2017; Johnson et al., 2016; Luong et al., 2016; Firat et al., 2016; Ha et al., 2016), inter alia. Each of these papers has both similarities and differences with our approach. (i) Most train several distinct models whereas we train a single model on input augmented with an explicit encoding of the language (similar to (Johnson et al., 2016)). (ii) Let k and m be the number of different input and output languages. We address the case k ∈{1, 2, 3} and m = k. Other work has addressed cases with k > 3 or m > 3; this would be an interesting avenue of future research for paradigm completion. (iii) Whereas training RNNs in machine translation is hard, we only experienced one difficult issue in our experiments (due to the low-resource setting): regularization. (iv) Some work is word- or subword-based, our work is character-based. The same way that similar word embeddings are learned for the inputs cow and vache (French for “cow”) in machine translation, we expect similar embeddings to be learned for similar Cyrillic/Latin characters. (v) Similar to work in machine translation, we show that zero-shot (and, by extension, one-shot) learning is possible. (Ha et al., 2016) (which was developed in parallel to our transfer model although we did not prepublish our paper on arxiv) is most similar to our work. Whereas Ha et al. 
(2016) address machine translation, we focus on the task of paradigm completion in low-resource settings and establish the state of the art for this problem. 7 Conclusion We presented a cross-lingual transfer learning method for paradigm completion, based on an RNN encoder-decoder model. Our experiments showed that information from a high-resource language can be leveraged for paradigm completion in a related low-resource language. Our analysis indicated that the degree to which the source language data helps for a certain target language depends on their relatedness. Our method led to significant improvements in settings with limited training data – up to 58% absolute improvement in accuracy – and, thus, enables the use of state-of-the-art models for paradigm completion in low-resource languages. 8 Future Work In the future, we want to develop methods to make better use of languages with different alphabets or morphosyntactic features, in order to increase the applicability of our knowledge transfer method. Acknowledgments We would like to thank the anonymous reviewers for their insightful comments. We are grateful to Siemens and Volkswagenstiftung for their generous support. This research would not have been possible without the organizers of the SIGMORPHON shared task, especially John Sylak-Glassman and Christo Kirov, who created the resources we use. 2001 References Daniel Abondolo. 2015. The Uralic Languages. Routledge. Roee Aharoni, Yoav Goldberg, and Yonatan Belinkov. 2016. Improving sequence to sequence learning for morphological inflection generation: The BIU-MIT systems for the SIGMORPHON 2016 shared task for morphological reinflection. In SIGMORPHON. Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2014. Semi-supervised learning of morphological paradigms and lexicons. In EACL. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. TACL 4:431–444. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Jan Buys and Jan A Botha. 2016. Cross-lingual morphological tagging for low-resource languages. In ACL. Rich Caruana. 1997. Multitask learning. Machine Learning 28(1):41–75. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint 1409.1259 . Chenhui Chu, Raj Dabre, and Sadao Kurohashi. 2017. An empirical comparison of simple domain adaptation methods for neural machine translation. arXiv preprint 1701.03214 . Shay B Cohen, Dipanjan Das, and Noah A Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In EMNLP. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR 12(Aug):2493–2537. Greville Corbett and Bernard Comrie. 2003. The Slavonic Languages. Routledge. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared task— morphological reinflection. In SIGMORPHON. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent underlying morphs and phonology. TACL 3:433–447. Ryan Cotterell, Hinrich Sch¨utze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In ACL. 
Mathias Creutz, Teemu Hirsim¨aki, Mikko Kurimo, Antti Puurula, Janne Pylkk¨onen, Vesa Siivola, Matti Varjokallio, Ebru Arisoy, Murat Sarac¸lar, and Andreas Stolcke. 2007. Analysis of morph-based speech recognition and the modeling of out-ofvocabulary words across languages. In NAACLHLT. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In ACL-IJCNLP. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In NAACL. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. In ACL. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In NAACL. Orhan Firat, KyungHyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. CoRR abs/1601.01073. Blaz Fortuna and John Shawe-Taylor. 2005. The use of machine translation tools for cross-lingual text mining. In ICML Workshop on Learning with Multiple Views. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In IEEE. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5):602–610. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint 1611.04798 . Martin Harris and Nigel Vincent. 2003. The Romance languages. Routledge. Robert Hetzron. 2013. The Semitic Languages. Routledge. Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, and n Gong. 2013. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In IEEE. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. CoRR abs/1611.04558. 2002 Katharina Kann, Ryan Cotterell, and Hinrich Sch¨utze. 2016. Neural morphological analysis: Encodingdecoding canonical segments. In EMNLP. Katharina Kann and Hinrich Sch¨utze. 2016a. Singlemodel encoder-decoder with explicit morphological representation for reinflection. In ACL. Katharina Kann and Hinrich Sch¨utze. 2016b. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In ACL. Christo Kirov, John Sylak-Glassman, Roger Que, and David Yarowsky. 2016. Very-large scale parsing and normalization of wiktionary morphological paradigms. In LREC. Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. 2015. A simple way to initialize recurrent networks of rectified linear units. CoRR abs/1504.00941. M Paul Lewis, editor. 2009. Ethnologue: Languages of the World. SIL International, Dallas, Texas, 16 edition. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Karthik Narasimhan, Damianos Karakos, Richard Schwartz, Stavros Tsakalidis, and Regina Barzilay. 2014. Morphological segmentation for keyword spotting. In EMNLP. Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In ACL. Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. 
Inflection generation as discriminative string transduction. In NAACL. J Scott Olsson, Douglas W Oard, and Jan Hajiˇc. 2005. Cross-language text classification. In ACM SIGIR. Sebastian Pad´o and Mirella Lapata. 2005. Crosslinguistic projection of role-semantic information. In HLT/EMNLP. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In LREC. Sunita Sarawagi and William W Cohen. 2004. Semimarkov conditional random fields for information extraction. In NIPS. Wolfgang Seeker and ¨Ozlem C¸ etino˘glu. 2015. A graphbased lattice dependency parser for joint morphological segmentation and syntactic analysis. TACL 3:359–373. Lei Shi, Rada Mihalcea, and Mingjun Tian. 2010. Cross language text classification by model translation and semi-supervised learning. In EMNLP. Benjamin Snyder and Regina Barzilay. 2008a. Crosslingual propagation for morphological analysis. In AAAI. Benjamin Snyder and Regina Barzilay. 2008b. Unsupervised multilingual learning for morphological segmentation. In ACL-HLT. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In ACLHLT. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. John Sylak-Glassman. 2016. The composition and use of the universal morphological feature schema (unimorph schema). Technical report, Department of Computer Science, Johns Hopkins University. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent feature schema for inflectional morphology. In ACLIJCNLP. Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In NAACL-HLT. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In NIPS. Mengqiu Wang and Christopher D Manning. 2014a. Cross-lingual projected expectation regularization for weakly supervised learning. TACL 2:55–66. Mengqiu Wang and Christopher D Manning. 2014b. Cross-lingual pseudo-projected expectation regularization for weakly supervised learning. TACL 2:55– 66. Yadollah Yaghoobzadeh and Hinrich Sch¨utze. 2015. Corpus-level fine-grained entity typing using contextual information. In EMNLP. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In HLT. Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In NAACL-HLT. 2003
2017
182
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2004–2015 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1183 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2004–2015 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1183 Morphological Inflection Generation with Hard Monotonic Attention Roee Aharoni & Yoav Goldberg Computer Science Department Bar-Ilan University Ramat-Gan, Israel {roee.aharoni,yoav.goldberg}@gmail.com Abstract We present a neural model for morphological inflection generation which employs a hard attention mechanism, inspired by the nearly-monotonic alignment commonly found between the characters in a word and the characters in its inflection. We evaluate the model on three previously studied morphological inflection generation datasets and show that it provides state of the art results in various setups compared to previous neural and nonneural approaches. Finally we present an analysis of the continuous representations learned by both the hard and soft attention (Bahdanau et al., 2015) models for the task, shedding some light on the features such models extract. 1 Introduction Morphological inflection generation involves generating a target word (e.g. “h¨artestem”, the German word for “hardest”), given a source word (e.g. “hart”, the German word for “hard”) and the morpho-syntactic attributes of the target (POS=adjective, gender=masculine, type=superlative, etc.). The task is important for many down-stream NLP tasks such as machine translation, especially for dealing with data sparsity in morphologically rich languages where a lemma can be inflected into many different word forms. Several studies have shown that translating into lemmas in the target language and then applying inflection generation as a post-processing step is beneficial for phrase-based machine translation (Minkov et al., 2007; Toutanova et al., 2008; Clifton and Sarkar, 2011; Fraser et al., 2012; Chahuneau et al., 2013) and more recently for neural machine translation (Garc´ıa-Mart´ınez et al., 2016). The task was traditionally tackled with hand engineered finite state transducers (FST) (Koskenniemi, 1983; Kaplan and Kay, 1994) which rely on expert knowledge, or using trainable weighted finite state transducers (Mohri et al., 1997; Eisner, 2002) which combine expert knowledge with datadriven parameter tuning. Many other machinelearning based methods (Yarowsky and Wicentowski, 2000; Dreyer and Eisner, 2011; Durrett and DeNero, 2013; Hulden et al., 2014; Ahlberg et al., 2015; Nicolai et al., 2015) were proposed for the task, although with specific assumptions about the set of possible processes that are needed to create the output sequence. More recently, the task was modeled as neural sequence-to-sequence learning over character sequences with impressive results (Faruqui et al., 2016). The vanilla encoder-decoder models as used by Faruqui et al. compress the input sequence to a single, fixed-sized continuous representation. Instead, the soft-attention based sequence to sequence learning paradigm (Bahdanau et al., 2015) allows directly conditioning on the entire input sequence representation, and was utilized for morphological inflection generation with great success (Kann and Sch¨utze, 2016b,a). 
However, the neural sequence-to-sequence models require large training sets in order to perform well: their performance on the relatively small CELEX dataset is inferior to the latent variable WFST model of Dreyer et al. (2008). Interestingly, the neural WFST model by Rastogi et al. (2016) also suffered from the same issue on the CELEX dataset, and surpassed the latent variable model only when given twice as much data to train on. We propose a model which handles the above issues by directly modeling an almost monotonic 2004 alignment between the input and output character sequences, which is commonly found in the morphological inflection generation task (e.g. in languages with concatenative morphology). The model consists of an encoder-decoder neural network with a dedicated control mechanism: in each step, the model attends to a single input state and either writes a symbol to the output sequence or advances the attention pointer to the next state from the bi-directionally encoded sequence, as described visually in Figure 1. This modeling suits the natural monotonic alignment between the input and output, as the network learns to attend to the relevant inputs before writing the output which they are aligned to. The encoder is a bi-directional RNN, where each character in the input word is represented using a concatenation of a forward RNN and a backward RNN states over the word’s characters. The combination of the bi-directional encoder and the controllable hard attention mechanism enables to condition the output on the entire input sequence. Moreover, since each character representation is aware of the neighboring characters, nonmonotone relations are also captured, which is important in cases where segments in the output word are a result of long range dependencies in the input word. The recurrent nature of the decoder, together with a dedicated feedback connection that passes the last prediction to the next decoder step explicitly, enables the model to also condition the current output on all the previous outputs at each prediction step. The hard attention mechanism allows the network to jointly align and transduce while using a focused representation at each step, rather then the weighted sum of representations used in the soft attention model. This makes our model Resolution Preserving (Kalchbrenner et al., 2016) while also keeping decoding time linear in the output sequence length rather than multiplicative in the input and output lengths as in the softattention model. In contrast to previous sequenceto-sequence work, we do not require the training procedure to also learn the alignment. Instead, we use a simple training procedure which relies on independently learned character-level alignments, from which we derive gold transduction+control sequences. The network can then be trained using straightforward cross-entropy loss. To evaluate our model, we perform extensive experiments on three previously studied morphological inflection generation datasets: the CELEX dataset (Baayen et al., 1993), the Wiktionary dataset (Durrett and DeNero, 2013) and the SIGMORPHON2016 dataset (Cotterell et al., 2016). We show that while our model is on par with or better than the previous neural and non-neural state-of-the-art approaches, it also performs significantly better with very small training sets, being the first neural model to surpass the performance of the weighted FST model with latent variables which was specifically tailored for the task by Dreyer et al. (2008). 
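To illustrate the gold transduction+control sequences derived from character-level alignments, here is a small Python sketch that follows the deterministic procedure given later as Algorithm 1. The encoding of alignments as (source, target) pairs with None marking the empty side, and the particular alignment chosen for flog→fliege, are assumptions made for this example.

```python
STEP = "<step>"

def oracle_actions(alignment):
    """Derive step/write actions from a monotone character alignment.

    `alignment` is a list of (src, trg) pairs: (x, None) is a 1-to-0
    alignment, (None, y) is 0-to-1, and (x, y) is 1-to-1.
    """
    actions = []
    for i, (src, trg) in enumerate(alignment):
        if trg is None:                            # 1-to-0: only consume input
            actions.append(STEP)
        else:
            actions.append(trg)                    # write the output character
            nxt = alignment[i + 1] if i + 1 < len(alignment) else None
            if nxt is None or nxt[0] is not None:  # next is not a 0-to-1 link
                actions.append(STEP)
    return actions

# A hypothetical alignment for flog -> fliege, with one 0-to-1 link inserting "e":
align = [("f", "f"), ("l", "l"), ("o", "i"), (None, "e"), ("g", "g"), (None, "e")]
print(oracle_actions(align))
# ['f', '<step>', 'l', '<step>', 'i', 'e', '<step>', 'g', 'e', '<step>']
```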
Finally, we analyze and compare our model and the soft attention model, showing how they function very similarly with respect to the alignments and representations they learn, in spite of our model being much simpler. This analysis also sheds light on the representations such models learn for the morphological inflection generation task, showing how they encode specific features like a symbol’s type and the symbol’s location in a sequence. To summarize, our contributions in this paper are three-fold: 1. We present a hard attention model for nearlymonotonic sequence to sequence learning, as common in the morphological inflection setting. 2. We evaluate the model on the task of morphological inflection generation, establishing a new state of the art on three previouslystudied datasets for the task. 3. We perform an analysis and comparison of our model and the soft-attention model, shedding light on the features such models extract for the inflection generation task. 2 The Hard Attention Model 2.1 Motivation We would like to transduce an input sequence, x1:n ∈Σ∗ x into an output sequence, y1:m ∈Σ∗ y, where Σx and Σy are the input and output vocabularies, respectively. Imagine a machine with read-only random access to the encoding of the input sequence, and a single pointer that determines the current read location. We can then model sequence transduction as a series of pointer movement and write operations. If we assume the alignment is monotone, the machine can be simpli2005 fied: the memory can be read in sequential order, where the pointer movement is controlled by a single “move forward” operation (step) which we add to the output vocabulary. We implement this behavior using an encoder-decoder neural network, with a control mechanism which determines in each step of the decoder whether to write an output symbol or promote the attention pointer the next element of the encoded input. 2.2 Model Definition In prediction time, we seek the output sequence y1:m ∈Σ∗ y, for which: y1:m = arg max y′∈Σ∗y p(y′|x1:n, f) (1) Where x ∈Σ∗ x is the input sequence and f = {f1, . . . , fl} is a set of attributes influencing the transduction task (in the inflection generation task these would be the desired morpho-syntactic attributes of the output sequence). Given a nearlymonotonic alignment between the input and the output, we replace the search for a sequence of letters with a sequence of actions s1:q ∈Σ∗ s, where Σs = Σy ∪{step}. This sequence is a series of step and write actions required to go from x1:n to y1:m according to the monotonic alignment between them (we will elaborate on the deterministic process of getting s1:q from a monotonic alignment between x1:n to y1:m in section 2.4). In this case we define: 1 s1:q = arg max s′∈Σ∗s p(s′|x1:n, f) = arg max s′∈Σ∗s Y s′ i∈s′ p(s′ i|s′ 1 . . . s′ i−1, x1:n, f) (2) which we can estimate using a neural network: s1:q = arg max s′∈Σ∗s NN(x1:n, f, Θ) (3) where the network’s parameters Θ are learned using a set of training examples. We will now describe the network architecture. 1We note that our model (Eq. 2) solves a different objective than (Eq 1), as it searches for the best derivation and not the best sequence. In order to accurately solve (1) we would need to marginalize over the different derivations leading to the same sequence, which is computationally challenging. However, as we see in the experiments section, the bestderivation approximation is effective in practice. Figure 1: The hard attention network architecture. 
A round tip expresses concatenation of the inputs it receives. The attention is promoted to the next input element once a step action is predicted. 2.3 Network Architecture Notation We use bold letters for vectors and matrices. We treat LSTM as a parameterized function LSTMθ(x1 . . . xn) mapping a sequence of input vectors x1 . . . xn to a an output vector hn. The equations for the LSTM variant we use are detailed in the supplementary material of this paper. Encoder For every element in the input sequence: x1:n = x1 . . . xn, we take the corresponding embedding: ex1 . . . exn, where: exi ∈RE. These embeddings are parameters of the model which will be learned during training. We then feed the embeddings into a bi-directional LSTM encoder (Graves and Schmidhuber, 2005) which results in a sequence of vectors: x1:n = x1 . . . xn, where each vector xi ∈ R2H is a concatenation of: LSTMforward(ex1, ex2, . . . exi) and LSTMbackward(exn, exn−1 . . . exi), the forward LSTM and the backward LSTM outputs when fed with exi. Decoder Once the input sequence is encoded, we feed the decoder RNN, LSTMdec, with three inputs at each step: 1. The current attended input, xa ∈R2H, initialized with the first element of the encoded sequence, x1. 2. A set of embeddings for the attributes that influence the generation process, concatenated to a single vector: f = [f1 . . . fl] ∈RF·l. 3. si−1 ∈RE, which is an embedding for the 2006 predicted output symbol in the previous decoder step. Those three inputs are concatenated into a single vector zi = [xa, f, si−1] ∈R2H+F·l+E, which is fed into the decoder, providing the decoder output vector: LSTMdec(z1 . . . zi) ∈RH. Finally, to model the distribution over the possible actions, we project the decoder output to a vector of |Σs| elements, followed by a softmax layer: p(si = c) = softmax c(W · LSTMdec(z1 . . . zi) + b) (4) Control Mechanism When the most probable action is step, the attention is promoted so xa contains the next encoded input representation to be used in the next step of the decoder. The process is demonstrated visually in Figure 1. 2.4 Training the Model For every example: (x1:n, y1:m, f) in the training data, we should produce a sequence of step and write actions s1:q to be predicted by the decoder. The sequence is dependent on the alignment between the input and the output: ideally, the network will attend to all the input characters aligned to an output character before writing it. While recent work in sequence transduction advocate jointly training the alignment and the decoding mechanisms (Bahdanau et al., 2015; Yu et al., 2016), we instead show that in our case it is worthwhile to decouple these stages and learn a hard alignment beforehand, using it to guide the training of the encoder-decoder network and enabling the use of correct alignments for the attention mechanism from the beginning of the network training phase. Thus, our training procedure consists of three stages: learning hard alignments, deriving oracle actions from the alignments, and learning a neural transduction model given the oracle actions. Learning Hard Alignments We use the character alignment model of Sudoh et al. (2013), based on a Chinese Restaurant Process which weights single alignments (character-to-character) in proportion to how many times such an alignment has been seen elsewhere out of all possible alignments. The aligner implementation we used produces either 0to-1, 1-to-0 or 1-to-1 alignments. 
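A minimal PyTorch-style sketch of the decoder and control mechanism described in Section 2.3 may help fix ideas: at each step the decoder consumes zi = [xa; f; si−1], predicts an action, and advances the hard attention pointer whenever that action is step. The original system was implemented in DyNet; the single-layer cell, layer sizes, BOS handling, and fixed-length stopping criterion below are simplifying assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HardAttentionDecoder(nn.Module):
    """Greedy decoder with a hard attention pointer (illustrative sketch)."""

    def __init__(self, n_actions, emb_dim=100, enc_dim=200, feat_dim=80, hid_dim=100):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, emb_dim)
        # decoder input z_i = [x_a ; f ; s_{i-1}]
        self.cell = nn.LSTMCell(enc_dim + feat_dim + emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_actions)      # projection to |Sigma_s|

    def greedy_decode(self, enc, feats, step_id, bos_id, max_len=50):
        # enc: (n, enc_dim) bi-LSTM states of the input characters
        # feats: (feat_dim,) concatenated attribute embeddings f
        h = enc.new_zeros(1, self.cell.hidden_size)
        c = enc.new_zeros(1, self.cell.hidden_size)
        prev = self.action_emb(torch.tensor([bos_id]))   # s_0
        a, actions = 0, []                               # a: hard attention pointer
        for _ in range(max_len):                         # a real decoder stops at an EOS action
            z = torch.cat([enc[a:a + 1], feats.unsqueeze(0), prev], dim=-1)
            h, c = self.cell(z, (h, c))
            action = self.out(h).argmax(dim=-1).item()
            actions.append(action)
            if action == step_id:                        # control: advance the pointer
                a = min(a + 1, enc.size(0) - 1)
            prev = self.action_emb(torch.tensor([action]))
        return actions
```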
Deriving Oracle Actions We infer the sequence of actions s1:q from the alignments by the deterministic procedure described in Algorithm 1. An example of an alignment with the resulting oracle action sequence is shown in Figure 2, where a4 is a 0-to-1 alignment and the rest are 1-to-1 alignments. Figure 2: Top: an alignment between a lemma x1:n and an inflection y1:m as predicted by the aligner. Bottom: s1:q, the sequence of actions to be predicted by the network, as produced by Algorithm 1 for the given alignment. Algorithm 1 Generates the oracle action sequence s1:q from the alignment between x1:n and y1:m Require: a, the list of either 1-to-1, 1-to-0 or 0to-1 alignments between x1:n and y1:m 1: Initialize s as an empty sequence 2: for each ai = (xai, yai) ∈a do 3: if ai is a 1-to-0 alignment then 4: s.append(step) 5: else 6: s.append(yai) 7: if ai+1 is not a 0-to-1 alignment then 8: s.append(step) return s This procedure makes sure that all the source input elements aligned to an output element are read (using the step action) before writing it. Learning a Neural Transduction Model The network is trained to mimic the actions of the oracle, and at inference time, it will predict the actions according to the input. We train it using a conventional cross-entropy loss function per example: L(x1:n, y1:m, f, Θ) = − X sj∈s1:q log softmax sj(d), d = W · LSTMdec(z1 . . . zi) + b (5) Transition System An alternative view of our model is that of a transition system with ADVANCE and WRITE(CH) actions, where the oracle is derived from a given hard alignment, the input is encoded using a biRNN, and the next action is determined by an RNN over the previous inputs and actions. 2007 3 Experiments We perform extensive experiments with three previously studied morphological inflection generation datasets to evaluate our hard attention model in various settings. In all experiments we compare our hard attention model to the best performing neural and non-neural models which were previously published on those datasets, and to our implementation of the global (soft) attention model presented by Luong et al. (2015) which we train with identical hyper-parameters as our hardattention model. The implementation details for our models are described in the supplementary material section of this paper. The source code and data for our models is available on github.2 CELEX Our first evaluation is on a very small dataset, to see if our model indeed avoids the tendency to overfit with small training sets. We report exact match accuracy on the German inflection generation dataset compiled by Dreyer et al. (2008) from the CELEX database (Baayen et al., 1993). The dataset includes only 500 training examples for each of the four inflection types: 13SIA→13SKE, 2PIE→13PKE, 2PKE→z, and rP→pA which we refer to as 13SIA, 2PIE, 2PKE and rP, respectively.3 We first compare our model to three competitive models from the literature that reported results on this dataset: the Morphological Encoder-Decoder (MED) of Kann and Sch¨utze (2016a) which is based on the soft-attention model of Bahdanau et al. (2015), the neural-weighted FST of Rastogi et al. (2016) which uses stacked bi-directional LSTM’s to weigh its arcs (NWFST), and the model of Dreyer et al. (2008) which uses a weighted FST with latent-variables structured particularly for morphological string transduction tasks (LAT). Following previous reports on this dataset, we use the same data splits as Dreyer et al. 
(2008), dividing the data for each inflection type into five folds, each consisting of 500 training, 1000 development and 1000 test examples. We train a separate model for each fold and report exact match accuracy, averaged over the five folds. 2https://github.com/roeeaharoni/ morphological-reinflection 3The acronyms stand for: 13SIA=1st/3rd person, singular, indefinite, past;13SKE=1st/3rd person, subjunctive, present; 2PIE=2nd person, plural, indefinite, present;13PKE=1st/3rd person, plural, subjunctive, present; 2PKE=2nd person, plural, subjunctive, present; z=infinitive; rP=imperative, plural; pA=past participle. Wiktionary To neutralize the negative effect of very small training sets on the performance of the different learning approaches, we also evaluate our model on the dataset created by Durrett and DeNero (2013), which contains up to 360k training examples per language. It was built by extracting Finnish, German and Spanish inflection tables from Wiktionary, used in order to evaluate their system based on string alignments and a semi-CRF sequence classifier with linguistically inspired features, which we use a baseline. We also used the dataset expansion made by Nicolai et al. (2015) to include French and Dutch inflections as well. Their system also performs an alignand-transduce approach, extracting rules from the aligned training set and applying them in inference time with a proprietary character sequence classifier. In addition to those systems we also compare to the results of the recent neural approach of Faruqui et al. (2016), which did not use an attention mechanism, and Yu et al. (2016), which coupled the alignment and transduction tasks. SIGMORPHON As different languages show different morphological phenomena, we also experiment with how our model copes with these various phenomena using the morphological inflection dataset from the SIGMORPHON2016 shared task (Cotterell et al., 2016). Here the training data consists of ten languages, with five morphological system types (detailed in Table 3): Russian (RU), German (DE), Spanish (ES), Georgian (GE), Finnish (FI), Turkish (TU), Arabic (AR), Navajo (NA), Hungarian (HU) and Maltese (MA) with roughly 12,800 training and 1600 development examples per language. We compare our model to two soft attention baselines on this dataset: MED (Kann and Sch¨utze, 2016b), which was the best participating system in the shared task, and our implementation of the global (soft) attention model presented by Luong et al. (2015). 4 Results In all experiments, for both the hard and soft attention models we implemented, we report results using an ensemble of 5 models with different random initializations by using majority voting on the final sequences the models predicted, as proposed by Kann and Sch¨utze (2016a). This was done to perform fair comparison to the models of Kann and Sch¨utze (2016a,b); Faruqui et al. (2016) which we compare to, that also perform a similar ensem2008 13SIA 2PIE 2PKE rP Avg. MED (Kann and Sch¨utze, 2016a) 83.9 95 87.6 84 87.62 NWFST (Rastogi et al., 2016) 86.8 94.8 87.9 81.1 87.65 LAT (Dreyer et al., 2008) 87.5 93.4 87.4 84.9 88.3 Soft 83.1 93.8 88 83.2 87 Hard 85.8 95.1 89.5 87.2 89.44 Table 1: Results on the CELEX dataset DE-N DE-V ES-V FI-NA FI-V FR-V NL-V Avg. Durrett and DeNero (2013) 88.31 94.76 99.61 92.14 97.23 98.80 90.50 94.47 Nicolai et al. (2015) 88.6 97.50 99.80 93.00 98.10 99.20 96.10 96.04 Faruqui et al. (2016) 88.12 97.72 99.81 95.44 97.81 98.82 96.71 96.34 Yu et al. 
(2016) 87.5 92.11 99.52 95.48 98.10 98.65 95.90 95.32 Soft 88.18 95.62 99.73 93.16 97.74 98.79 96.73 95.7 Hard 88.87 97.35 99.79 95.75 98.07 99.04 97.03 96.55 Table 2: Results on the Wiktionary datasets suffixing+stem changes circ. suffixing+agg.+v.h. c.h. templatic RU DE ES GE FI TU HU NA AR MA Avg. MED 91.46 95.8 98.84 98.5 95.47 98.93 96.8 91.48 99.3 88.99 95.56 Soft 92.18 96.51 98.88 98.88 96.99 99.37 97.01 95.41 99.3 88.86 96.34 Hard 92.21 96.58 98.92 98.12 95.91 97.99 96.25 93.01 98.77 88.32 95.61 Table 3: Results on the SIGMORPHON 2016 morphological inflection dataset. The text above each language lists the morphological phenomena it includes: circ.=circumfixing, agg.=agglutinative, v.h.=vowel harmony, c.h.=consonant harmony bling technique. On the low resource setting (CELEX), our hard attention model significantly outperforms both the recent neural models of Kann and Sch¨utze (2016a) (MED) and Rastogi et al. (2016) (NWFST) and the morphologically aware latent variable model of Dreyer et al. (2008) (LAT), as detailed in Table 1. In addition, it significantly outperforms our implementation of the soft attention model (Soft). It is also, to our knowledge, the first model that surpassed in overall accuracy the latent variable model on this dataset. We attribute our advantage over the soft attention models to the ability of the hard attention control mechanism to harness the monotonic alignments found in the data. The advantage over the FST models may be explained by our conditioning on the entire output history which is not available in those models. Figure 3 plots the train-set and dev-set accuracies of the soft and hard attention models as a function of the training epoch. While both models perform similarly on the train-set (with the soft attention model fitting it slightly faster), the hard attention model performs significantly better on the dev-set. This shows the soft attention model’s tendency to overfit on the small dataset, as it is not enforcing the monotonic assumption of the hard attention model. On the large training set experiments (Wiktionary), our model is the best performing model on German verbs, Finnish nouns/adjectives and Dutch verbs, resulting in the highest reported average accuracy across all inflection types when compared to the four previous neural and nonneural state of the art baselines, as detailed in Table 2. This shows the robustness of our model also with large amounts of training examples, and the advantage the hard attention mechanism provides over the encoder-decoder approach of Faruqui et al. (2016) which does not employ an attention mechanism. Our model is also significantly more accurate than the model of Yu et al. (2016), which shows the advantage of using independently learned alignments to guide the network’s attention from the beginning of the training process. While our soft-attention implementation outperformed the models of Yu et al. (2016) and Durrett and DeNero (2013), it still performed inferiorly to the hard attention model. As can be seen in Table 3, on the SIGMORPHON 2016 dataset our model performs 2009 0 10 20 30 40 0 0.5 1 epoch accuracy soft-train hard-train soft-dev hard-dev Figure 3: Learning curves for the soft and hard attention models on the first fold of the CELEX dataset Figure 4: A comparison of the alignments as predicted by the soft attention (left) and the hard attention (right) models on examples from CELEX. 
better than both soft-attention baselines for the suffixing+stem-change languages (Russian, German and Spanish) and is slightly less accurate than our implementation of the soft attention model on the rest of the languages, which is now the best performing model on this dataset to our knowledge. We explain this by looking at the languages from a linguistic typology point of view, as detailed in Cotterell et al. (2016). Since Russian, German and Spanish employ a suffixing morphology with internal stem changes, they are more suitable for monotonic alignment as the transformations they need to model are the addition of suffixes and changing characters in the stem. The rest of the languages in the dataset employ more context sensitive morphological phenomena like vowel harmony and consonant harmony, which require to model long range dependencies in the input sequence which better suits the soft attention mechanism. While our implementation of the soft attention model and MED are very similar modelwise, we hypothesize that our soft attention model results are better due to the fact that we trained the model for 100 epochs and picked the best performing model on the development set, while the MED system was trained for a fixed amount of 20 epochs (although trained on more data – both train and development sets). 5 Analysis The Learned Alignments In order to see if the alignments predicted by our model fit the monotonic alignment structure found in the data, and whether are they more suitable for the task when compared to the alignments found by the soft attention model, we examined alignment predictions of the two models on examples from the development portion of the CELEX dataset, as depicted in Figure 4. First, we notice the alignments found by the soft attention model are also monotonic, supporting our modeling approach for the task. Figure 4 (bottom-right) also shows how the hardattention model performs deletion (legte→lege) by predicting a sequence of two step operations. Another notable morphological transformation is the one-to-many alignment, found in the top example: flog→fliege, where the model needs to transform a character in the input, o, to two characters in the output, ie. This is performed by two consecutive write operations after the step operation of the relevant character to be replaced. Notice that in this case, the soft attention model performs a different alignment by aligning the character i to o and the character g to the sequence eg, which is not the expected alignment in this case from a linguistic point of view. The Learned Representations How does the soft-attention model manage to learn nearlyperfect monotonic alignments? Perhaps the the network learns to encode the sequential position as part of its encoding of an input element? More generally, what information is encoded by the soft and hard alignment encoders? We selected 500 random encoded characters-in-context from input 2010 (a) Colors indicate which character is encoded. (b) Colors indicate which character is encoded. (c) Colors indicate the character’s position. (d) Colors indicate the character’s position. Figure 5: SVD dimension reduction to 2D of 500 character representations in context from the encoder, for both the soft attention (top) and hard attention (bottom) models. words in the CELEX development set, where every encoded representation is a vector in R200. 
Since those vectors are outputs from the bi-LSTM encoders of the models, every vector of this form carries information of the specific character with its entire context. We project these encodings into 2-D using SVD and plot them twice, each time using a different coloring scheme. We first color each point according to the character it represents (Figures 5a, 5b). In the second coloring scheme (Figures 5c, 5d), each point is colored according to the character’s sequential position in the word it came from, blue indicating positions near the beginning of the word, and red positions near its end. While both models tend to cluster representations for similar characters together (Figures 5a, 5b), the hard attention model tends to have much more isolated character clusters. Figures 5c, 5d show that both models also tend to learn representations which are sensitive to the position of the character, although it seems that here the soft attention model is more sensitive to this information as its coloring forms a nearly-perfect red-to-blue transition on the X axis. This may be explained by the soft-attention mechanism encouraging the encoder to encode positional information in the input representations, which may help it to predict better attention scores, and to avoid collisions when computing the weighted sum of representations for the context vector. In contrast, our hardattention model has other means of obtaining the position information in the decoder using the step actions, and for that reason it does not encode it as strongly in the representations of the inputs. This behavior may allow it to perform well even with fewer examples, as the location information is represented more explicitly in the model using the step actions. 6 Related Work Many previous works on inflection generation used machine learning methods (Yarowsky and Wicentowski, 2000; Dreyer and Eisner, 2011; Durrett and DeNero, 2013; Hulden et al., 2014; Ahlberg et al., 2015; Nicolai et al., 2015) with assumptions about the set of possible processes needed to create the output word. Our work was mainly inspired by Faruqui et al. (2016) which trained an independent encoder-decoder neural 2011 network for every inflection type in the training data, alleviating the need for feature engineering. Kann and Sch¨utze (2016b,a) tackled the task with a single soft attention model (Bahdanau et al., 2015) for all inflection types, which resulted in the best submission at the SIGMORPHON 2016 shared task (Cotterell et al., 2016). In another closely related work, Rastogi et al. (2016) model the task with a WFST in which the arc weights are learned by optimizing a global loss function over all the possible paths in the state graph, while modeling contextual features with bi-directional LSTMS. This is similar to our approach, where instead of learning to mimic a single greedy alignment as we do, they sum over all possible alignments. While not committing to a single greedy alignment could in theory be beneficial, we see in Table 1 that—at least for the low resource scenario—our greedy approach is more effective in practice. Another recent work (Kann et al., 2016) proposed performing neural multi-source morphological reinflection, generating an inflection from several source forms of a word. 
Previous works on neural sequence transduction include the RNN Transducer (Graves, 2012) which uses two independent RNN’s over monotonically aligned sequences to predict a distribution over the possible output symbols in each step, including a null symbol to model the alignment. Yu et al. (2016) improved this by replacing the null symbol with a dedicated learned transition probability. Both models are trained using a forwardbackward approach, marginalizing over all possible alignments. Our model differs from the above by learning the alignments independently, thus enabling a dependency between the encoder and decoder. While providing better results than Yu et al. (2016), this also simplifies the model training using a simple cross-entropy loss. A recent work by Raffel et al. (2017) jointly learns the hard monotonic alignments and transduction while maintaining the dependency between the encoder and the decoder. Jaitly et al. (2015) proposed the Neural Transducer model, which is also trained on external alignments. They divide the input into blocks of a constant size and perform soft attention separately on each block. Lu et al. (2016) used a combination of an RNN encoder with a CRF layer to model the dependencies in the output sequence. An interesting comparison between ”traditional” sequence transduction models (Bisani and Ney, 2008; Jiampojamarn et al., 2010; Novak et al., 2012) and encoder-decoder neural networks for monotone string transduction tasks was done by Schnober et al. (2016), showing that in many cases there is no clear advantage to one approach over the other. Regarding task-specific improvements to the attention mechanism, a line of work on attentionbased speech recognition (Chorowski et al., 2015; Bahdanau et al., 2016) proposed adding location awareness by using the previous attention weights when computing the next ones, and preventing the model from attending on too many or too few inputs using “sharpening” and “smoothing” techniques on the attention weight distributions. Cohn et al. (2016) offered several changes to the attention score computation to encourage wellknown modeling biases found in traditional machine translation models like word fertility, position and alignment symmetry. Regarding the utilization of independent alignment models for training attention-based networks, Mi et al. (2016) showed that the distance between the attentioninfused alignments and the ones learned by an independent alignment model can be added to the networks’ training objective, resulting in an improved translation and alignment quality. 7 Conclusion We presented a hard attention model for morphological inflection generation. The model employs an explicit alignment which is used to train a neural network to perform transduction by decoding with a hard attention mechanism. Our model performs better than previous neural and non-neural approaches on various morphological inflection generation datasets, while staying competitive with dedicated models even with very few training examples. It is also computationally appealing as it enables linear time decoding while staying resolution preserving, i.e. not requiring to compress the input sequence to a single fixedsized vector. Future work may include applying our model to other nearly-monotonic alignand-transduce tasks like abstractive summarization, transliteration or machine translation. 
Acknowledgments This work was supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI), and The Israeli Science Foundation (grant number 1555/15). 2012 References Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In NAACL HLT 2015. pages 1024– 1029. R Harald Baayen, Richard Piepenbrock, and Rijn van H. 1993. The {CELEX} lexical data base on {CDROM} . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations (ICLR) . Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Yoshua Bengio, et al. 2016. End-to-end attentionbased large vocabulary speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 4945–4949. Maximilian Bisani and Hermann Ney. 2008. Jointsequence models for grapheme-to-phoneme conversion. Speech Commun. 50(5):434–451. https://doi.org/10.1016/j.specom.2008.01.002. Victor Chahuneau, Eva Schlinger, Noah A. Smith, and Chris Dyer. 2013. Translating into morphologically rich languages with synthetic phrases. In EMNLP. pages 1677–1687. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems 28, pages 577–585. Ann Clifton and Anoop Sarkar. 2011. Combining morpheme-based machine translation with postprocessing morpheme prediction. In ACL. pages 32–42. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 876–885. http://www.aclweb.org/anthology/N16-1102. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In EMNLP. pages 616–627. Markus Dreyer, Jason R Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the conference on empirical methods in natural language processing. pages 1080–1089. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In NAACL HLT 2013. pages 1185–1195. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th annual meeting on Association for Computational Linguistics. pages 1–8. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In NAACL HLT 2016. Alexander M. Fraser, Marion Weller, Aoife Cahill, and Fabienne Cap. 2012. Modeling inflection and wordformation in smt. In EACL. pages 664–674. Mercedes Garc´ıa-Mart´ınez, Lo¨ıc Barrault, and Fethi Bougares. 2016. Factored neural machine translation. arXiv preprint arXiv:1609.04621 . A. Graves and J. Schmidhuber. 2005. 
Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks 18(5-6):602–610. Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711 . Mans Hulden, Markus Forsberg, and Malin Ahlberg. 2014. Semi-supervised learning of morphological paradigms and lexicons. In EACL. pages 569–578. Navdeep Jaitly, David Sussillo, Quoc V Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio. 2015. A neural transducer. arXiv preprint arXiv:1511.04868 . Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training framework. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 697–700. http://www.aclweb.org/anthology/N10-1103. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099 . Katharina Kann, Ryan Cotterell, and Hinrich Sch¨utze. 2016. Neural multi-source morphological reinflection. EACL 2017 . 2013 Katharina Kann and Hinrich Sch¨utze. 2016a. Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection. Katharina Kann and Hinrich Sch¨utze. 2016b. Singlemodel encoder-decoder with explicit morphological representation for reinflection. In ACL. Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics 20(3):331–378. Kimmo Koskenniemi. 1983. Two-level morphology: A general computational model of word-form recognition and production. Technical report. Liang Lu, Lingpeng Kong, Chris Dyer, Noah A Smith, and Steve Renals. 2016. Segmental recurrent neural networks for end-to-end speech recognition. arXiv preprint arXiv:1603.00223 . Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2283–2288. https://aclweb.org/anthology/D16-1249. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 128– 135. http://www.aclweb.org/anthology/P07-1017. Mehryar Mohri, Fernando Pereira, and Michael Riley. 1997. A rational design for a weighted finite-state transducer library. In International Workshop on Implementing Automata. pages 144–158. Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In NAACL HLT 2015. pages 922–931. Josef R. Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based graphemeto-phoneme conversion: Open source tools for alignment, model-building and decoding. 
In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing. Association for Computational Linguistics, Donostia–San Sebastin, pages 45–49. http://www.aclweb.org/anthology/W12-6208. C. Raffel, T. Luong, P. J. Liu, R. J. Weiss, and D. Eck. 2017. Online and Linear-Time Attention by Enforcing Monotonic Alignments. arXiv preprint arXiv:1704.00784 . Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proc. of NAACL. Carsten Schnober, Steffen Eger, Erik-Lˆan Do Dinh, and Iryna Gurevych. 2016. Still not there? comparing traditional sequence-to-sequence models to encoderdecoder neural networks on monotone string translation tasks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1703– 1714. http://aclweb.org/anthology/C16-1160. Katsuhito Sudoh, Shinsuke Mori, and Masaaki Nagata. 2013. Noise-aware character alignment for bootstrapping statistical machine transliteration from bilingual corpora. In EMNLP 2013. pages 204–209. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In ACL. pages 514–522. David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In ACL. Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1307–1316. https://aclweb.org/anthology/D161138. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . 2014 Supplementary Material Training Details, Implementation and Hyper Parameters To train our models, we used the train portion of the datasets as-is and evaluated on the test portion the model which performed best on the development portion of the dataset, without conducting any specific pre-processing steps on the data. We train the models for a maximum of 100 epochs over the training set. To avoid long training time, we trained the model for 20 epochs for datasets larger than 50k examples, and for 5 epochs for datasets larger than 200k examples. The models were implemented using the python bindings of the dynet toolkit.4 We trained the network by optimizing the expected output sequence likelihood using crossentropy loss as mentioned in equation 5. For optimization we used ADADELTA (Zeiler, 2012) without regularization. We updated the weights after every example (i.e. mini-batches of size 1). We used the dynet toolkit implementation of an LSTM network with two layers for all models, each having 100 entries in both the encoder and decoder. The character embeddings were also vectors with 100 entries for the CELEX experiments, and with 300 entries for the SIGMORPHON and Wiktionary experiments. The morpho-syntactic attribute embeddings were vectors of 20 entries in all experiments. We did not use beam search while decoding for both the hard and soft attention models as it is significantly slower and did not show clear improvement in previous experiments we conducted. 
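The training regime just described can be summarized in a short sketch: cross-entropy over the oracle action sequence, ADADELTA updates after every single example, and model selection on the development set. The PyTorch optimizer and the model(*inputs) interface are stand-ins for the authors' DyNet code and are only meant to convey the shape of the loop.

```python
import torch
import torch.nn as nn

def train(model, train_data, epochs=100):
    """train_data: list of (inputs, oracle) pairs, where `oracle` is a
    LongTensor of gold action ids derived from the character alignments."""
    opt = torch.optim.Adadelta(model.parameters())     # no extra regularization
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                            # fewer epochs for larger datasets
        for inputs, oracle in train_data:              # mini-batches of size 1
            logits = model(*inputs)                    # (len(oracle), |actions|) scores
            loss = loss_fn(logits, oracle)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # in practice: evaluate on the dev set here and keep the best model
```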
For the character-level alignment process we use the implementation provided by the organizers of the SIGMORPHON 2016 shared task.5

LSTM Equations

We used the LSTM variant implemented in the dynet toolkit, which corresponds to the following equations:

i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1} + W_{ic} c_{t-1} + b_i)
f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1} + W_{fc} c_{t-1} + b_f)
\tilde{c}_t = \tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c)
c_t = c_{t-1} \circ f_t + \tilde{c}_t \circ i_t
o_t = \sigma(W_{ox} x_t + W_{oh} h_{t-1} + W_{oc} c_t + b_o)
h_t = \tanh(c_t) \circ o_t    (6)

4 https://github.com/clab/dynet
5 https://github.com/ryancotterell/sigmorphon2016
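A minimal NumPy sketch of a single step of this cell is given below; the weight shapes and the random initialization are illustrative, and no attempt is made to mirror dynet's internal implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    # one step of the peephole LSTM cell in equation (6); p holds the weights
    i_t = sigmoid(p["Wix"] @ x_t + p["Wih"] @ h_prev + p["Wic"] @ c_prev + p["bi"])
    f_t = sigmoid(p["Wfx"] @ x_t + p["Wfh"] @ h_prev + p["Wfc"] @ c_prev + p["bf"])
    c_tilde = np.tanh(p["Wcx"] @ x_t + p["Wch"] @ h_prev + p["bc"])
    c_t = c_prev * f_t + c_tilde * i_t
    o_t = sigmoid(p["Wox"] @ x_t + p["Woh"] @ h_prev + p["Woc"] @ c_t + p["bo"])
    h_t = np.tanh(c_t) * o_t
    return h_t, c_t

# toy usage with randomly initialized weights (input and hidden size 100)
d_in, d_h = 100, 100
rng = np.random.default_rng(0)
p = {k: 0.1 * rng.standard_normal((d_h, d_in if k.endswith("x") else d_h))
     for k in ["Wix", "Wih", "Wic", "Wfx", "Wfh", "Wfc", "Wcx", "Wch", "Wox", "Woh", "Woc"]}
p.update({b: np.zeros(d_h) for b in ["bi", "bf", "bc", "bo"]})
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), p)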
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2016–2027 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1184 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2016–2027 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1184 From Characters to Words to in Between: Do We Capture Morphology? Clara Vania and Adam Lopez Institute for Language, Cognition and Computation School of Informatics University of Edinburgh [email protected], [email protected] Abstract Words can be represented by composing the representations of subword units such as word segments, characters, and/or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data. 1 Introduction Continuous representations of words learned by neural networks are central to many NLP tasks (Cho et al., 2014; Chen and Manning, 2014; Dyer et al., 2015). However, directly mapping a finite set of word types to a continuous representation has well-known limitations. First, it makes a closed vocabulary assumption, enabling only generic out-of-vocabulary handling. Second, it cannot exploit systematic functional relationships in learning. For example, cat and cats stand in the same relationship as dog and dogs. While this relationship might be discovered for these specific frequent words, it does not help us learn that the same relationship also holds for the much rarer words sloth and sloths. These functional relationships reflect the fact that words are composed from smaller units of meaning, or morphemes. For instance, cats consists of two morphemes, cat and -s, with the latter shared by the words dogs and tarsiers. Modeling this effect is crucial for languages with rich morphology, where vocabulary sizes are larger, many more words are rare, and many more such functional relationships exist. Hence, some models produce word representations as a function of subword units obtained from morphological segmentation or analysis (Luong et al., 2013; Botha and Blunsom, 2014; Cotterell and Sch¨utze, 2015). A downside of these models is that they depend on morphological segmenters or analyzers. Morphemes typically have similar orthographic representations across words. For example, the morpheme -s is realized as -es in finches. 
Since this variation is limited, the general relationship between morphology and orthography can be exploited by composing the representations of characters (Ling et al., 2015; Kim et al., 2016), character n-grams (Sperr et al., 2013; Wieting et al., 2016; Bojanowski et al., 2016; Botha and Blunsom, 2014), bytes (Plank et al., 2016; Gillick et al., 2016), or combinations thereof (Santos and Zadrozny, 2014; Qiu et al., 2014). These models are compact, can represent rare and unknown words, and do not require morphological analyzers. They raise a provocative question: Does NLP benefit from models of morphology, or can they be replaced entirely by models of characters? The relative merits of word, subword. and character-level models are not fully understood because each new model has been compared on 2016 different tasks and datasets, and often compared against word-level models. A number of questions remain open: 1. How do representations based on morphemes compare with those based on characters? 2. What is the best way to compose subword representations? 3. Do character-level models capture morphology in terms of predictive utility? 4. How do different representations interact with languages of different morphological typologies? The last question is raised by Bender (2013): languages are typologically diverse, and the behavior of a model on one language may not generalize to others. Character-level models implicitly assume concatenative morphology, but many widely-spoken languages feature nonconcatenative morphology, and it is unclear how such models will behave on these languages. To answer these questions, we performed a systematic comparison across different models for the simple and ubiquitous task of language modeling. We present experiments that vary (1) the type of subword unit; (2) the composition function; and (3) morphological typology. To understand the extent to which character-level models capture true morphological regularities, we present oracle experiments using human morphological annotations instead of automatic morphological segments. Our results show that: 1. For most languages, character-level representations outperform the standard word representations. Most interestingly, a previously unstudied combination of character trigrams composed with bi-LSTMs performs best on the majority of languages. 2. Bi-LSTMs and CNNs are more effective composition functions than addition. 3. Character-level models learn functional relationships between orthographically similar words, but don’t (yet) match the predictive accuracy of models with access to true morphological analyses. 4. Character-level models are effective across a range of morphological typologies, but orthography influences their effectiveness. word tries morphemes try+s morphs tri+es morph. analysis try+VB+3rd+SG+Pres Table 1: The morphemes, morphs, and morphological analysis of tries. 2 Morphological Typology A morpheme is the smallest unit of meaning in a word. Some morphemes express core meaning (roots), while others express one or more dependent features of the core meaning, such as person, gender, or aspect. A morphological analysis identifies the lemma and features of a word. A morph is the surface realization of a morpheme (Morley, 2000), which may vary from word to word. These distinctions are shown in Table 1. Morphological typology classifies languages based on the processes by which morphemes are composed to form words. 
While most languages will exhibit a variety of such processes, for any given language, some processes are much more frequent than others, and we will broadly identify our experimental languages with these processes. When morphemes are combined sequentially, the morphology is concatenative. However, morphemes can also be composed by nonconcatenative processes. We consider four broad categories of both concatenative and nonconcatenative processes in our experiments. Fusional languages realize multiple features in a single concatenated morpheme. For example, English verbs can express number, person, and tense in a single morpheme: wanted (English) want + ed want + VB+1st+SG+Past Agglutinative languages assign one feature per morpheme. Morphemes are concatenated to form a word and the morpheme boundaries are clear. For example (Haspelmath, 2010): okursam (Turkish) oku+r+sa+m “read”+AOR+COND+1SG Root and Pattern Morphology forms words by inserting consonants and vowels of dependent morphemes into a consonantal root based on a given pattern. For example, the Arabic root ktb (“write”) produces (Roark and Sproat, 2007): katab “wrote” (Arabic) 2017 takaatab “wrote to each other” (Arabic) Reduplication is a process where a word form is produced by repeating part or all of the root to express new features. For example: anak “child” (Indonesian) anak-anak “children” (Indonesian) buah “fruit” (Indonesian) buah-buahan “various fruits” (Indonesian) 3 Representation Models We compare ten different models, varying subword units and composition functions that have commonly been used in recent work, but evaluated on various different tasks (Table 2). Given word w, we compute its representation w as: w = f(Ws, σ(w)) (1) where σ is a deterministic function that returns a sequence of subword units; Ws is a parameter matrix of representations for the vocabulary of subword units; and f is a composition function which takes σ(w) and Ws as input and returns w. All of the representations that we consider take this form, varying only in f and σ. 3.1 Subword Units We consider four variants of σ in Equation 1, each returning a different type of subword unit: character, character trigram, or one of two types of morph. Morphs are obtained from Morfessor (Smit et al., 2014) or a word segmentation based on Byte Pair Encoding (BPE; Gage (1994)), which has been shown to be effective for handling rare words in neural machine translation (Sennrich et al., 2016). BPE works by iteratively replacing frequent pairs of characters with a single unused character. For Morfessor, we use default parameters while for BPE we set the number of merge operations to 10,000.1 When we segment into character trigrams, we consider all trigrams in the word, including those covering notional beginning and end of word characters, as in Sperr et al. (2013). Example output of σ is shown in Table 3. 3.2 Composition Functions We use three variants of f in Eq. 1. The first constructs the representation w of word w by adding 1BPE takes a single parameter: the number of merge operations. We tried different parameter values (1k, 10k, 100k) and manually examined the resulting segmentation on the English dataset. Qualitatively, 10k gave the most plausible segmentation and we used this setting across all languages. the representations of its subwords s1, . . . , sn = σ(w), where the representation of si is vector si. 
w = n X i=1 si (2) The only subword unit that we don’t compose by addition is characters, since this will produce the same representation for many different words. Our second composition function is a bidirectional long-short-term memory (bi-LSTM), which we adapt based on its use in the characterlevel model of Ling et al. (2015) and its widespread use in NLP generally. Given si and the previous LSTM hidden state hi−1, an LSTM (Hochreiter and Schmidhuber, 1997) computes the following outputs for the subword at position i: hi = LSTM(si, hi−1) (3) ˆsi+1 = g(VT · hi) (4) where ˆsi+1 is the predicted target subword, g is the softmax function and V is a weight matrix. A bi-LSTM (Graves et al., 2005) combines the final state of an LSTM over the input sequence with one over the reversed input sequence. Given the hidden state produced from the final input of the forward LSTM, hfw n and the hidden state produced from the final input of the backward LSTM hbw 0 , we compute the word representation as: wt = Wf · hfw n + Wb · hbw 0 + b (5) where Wf, Wb, and b are parameter matrices and hfw n and hbw 0 are forward and backward LSTM states, respectively. The third composition function is a convolutional neural network (CNN) with highway layers, as in Kim et al. (2016). Let c1, . . . , ck be the sequence of characters of word w. The character embedding matrix is C ∈Rd×k, where the i-th column corresponds to the embeddings of ci. We first apply a narrow convolution between C and a filter F ∈Rd×n of width n to obtain a feature map f ∈Rk−n+1. In particular, the computation of the j-th element of f is defined as f[j] = tanh(⟨C[∗, j : j + n −1], F⟩+ b) (6) where ⟨A, B⟩= Tr(ABT ) is the Frobenius inner product and b is a bias. The CNN model applies filters of varying width, representing features 2018 Models Subword Unit(s) Composition Function Sperr et al. (2013) words, character n-grams addition Luong et al. (2013) morphs (Morfessor) recursive NN Botha and Blunsom (2014) words, morphs (Morfessor) addition Qiu et al. (2014) words, morphs (Morfessor) addition Santos and Zadrozny (2014) words, characters CNN Cotterell and Sch¨utze (2015) words, morphological analyses addition Sennrich et al. (2016) morphs (BPE) none Kim et al. (2016) characters CNN Ling et al. (2015) characters bi-LSTM Wieting et al. (2016) character n-grams addition Bojanowski et al. (2016) character n-grams addition Vylomova et al. (2016) characters, morphs (Morfessor) bi-LSTM, CNN Miyamoto and Cho (2016) words, characters bi-LSTM Rei et al. (2016) words, characters bi-LSTM Lee et al. (2016) characters CNN Kann and Sch¨utze (2016) characters, morphological analyses none Heigold et al. (2017) words, characters bi-LSTM, CNN Table 2: Summary of previous work on representing words through compositions of subword units. Unit Output of σ(wants) Morfessor ˆwant, s$ BPE ˆw, ants$ char-trigram ˆwa, wan, ant, nts ts$ character ˆ, w, a, n, t, s, $ analysis want+VB, +3rd, +SG, +Pres Table 3: Input representations for wants. of character n-grams. We then calculate the maxover-time of each feature map. yj = max j f[j] (7) and concatenate them to derive the word representation wt = [y1, . . . , ym], where m is the number of filters applied. Highway layers allow some dimensions of wt to be carried or transformed. Since it can learn character n-grams directly, we only use the CNN with character input. 3.3 Language Model We use language models (LM) because they are simple and fundamental to many NLP applications. Given a sequence of text s = w1, . . . 
, wT , our LM computes the probability of s as: P(w1, . . . , wT ) = T Y t=1 P(yt|w1, . . . , wt−1) (8) Figure 1: Our LSTM-LM architecture. where yt = wt if wt is in the output vocabulary and yt = UNK otherwise. Our language model is an LSTM variant of recurrent neural network language (RNN) LM (Mikolov et al., 2010). At time step t, it receives input wt and predicts yt+1. Using Eq. 1, it first computes representation wt of wt. Given this representation and previous state ht−1, it produces a new state ht and predicts yt+1: ht = LSTM(wt, ht−1) (9) ˆyt+1 = g(VT · ht) (10) where g is a softmax function over the vocabulary yielding the probability in Equation 8. Note that this design means that we can predict only words 2019 Typology Languages #tokens #types Fusional Czech 1.2M 125.4K English 1.2M 81.1K Russian 0.8M 103.5K Agglutinative Finnish 1.2M 188.4K Japanese 1.2M 59.2K Turkish 0.6M 126.2K Root&Pattern Arabic 1.4M 137.5K Hebrew 1.1M 104.9K Reduplication Indonesian 1.2M 76.5K Malaysian 1.2M 77.7K Table 4: Statistics of our datasets. from a finite output vocabulary, so our models differ only in their representation of context words. This design makes it possible to compare language models using perplexity, since they have the same event space, though open vocabulary word prediction is an interesting direction for future work. The complete architecture of our system is shown in Figure 1, showing segmentation function σ and composition function f from Equation 1. 4 Experiments We perform experiments on ten languages (Table 4). We use datasets from Ling et al. (2015) for English and Turkish. For Czech and Russian we use Universal Dependencies (UD) v1.3 (Nivre et al., 2015). For other languages, we use preprocessed Wikipedia data (Al-Rfou et al., 2013).2 For each dataset, we use approximately 1.2M tokens to train, and approximately 150K tokens each for development and testing. Preprocessing involves lowercasing (except for character models) and removing hyperlinks. To ensure that we compared models and not implementations, we reimplemented all models in a single framework using Tensorflow (Abadi et al., 2015).3 We use a common setup for all experiments based on that of Ling et al. (2015), Kim et al. (2016), and Miyamoto and Cho (2016). In preliminary experiments, we confirmed that our models produced similar patterns of perplexities for the reimplemented word and character LSTM 2The Arabic and Hebrew dataset are unvocalized. Japanese mixes Kanji, Katakana, Hiragana, and Latin characters (for foreign words). Hence, a Japanese character can correspond to a character, syllable, or word. The preprocessed dataset is already word-segmented. 3Our implementation of these models can be found at https://github.com/claravania/subword-lstm-lm models of Ling et al. (2015). Even following detailed discussion with Ling (p.c.), we were unable to reproduce their perplexities exactly—our English reimplementation gives lower perplexities; our Turkish higher—but we do reproduce their general result that character bi-LSTMs outperform word models. We suspect that different preprocessing and the stochastic learning explains differences in perplexities. Our final model with biLSTMs composition follows Miyamoto and Cho (2016) as it gives us the same perplexity results for our preliminary experiments on the Penn Treebank dataset (Marcus et al., 1993), preprocessed by Mikolov et al. (2010). 
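To make the representation models of Section 3 concrete before turning to training details, the following minimal sketch illustrates the subword function σ for characters and character trigrams (with the notional ^ and $ boundary markers of Table 3) and the additive composition of Equation 2. The lazily filled embedding table and the 200-dimensional random vectors are toy stand-ins, not the Tensorflow implementation.

import numpy as np

def sigma(word, unit="char-trigram"):
    # return the subword units of a word, with ^ and $ boundary markers
    w = "^" + word + "$"
    if unit == "character":
        return list(w)
    if unit == "char-trigram":
        return [w[i:i + 3] for i in range(len(w) - 2)]
    raise ValueError(unit)

def compose_add(units, emb, dim=200):
    # additive composition (Equation 2) over a lookup table of subword vectors
    return sum(emb.setdefault(u, 0.01 * np.random.randn(dim)) for u in units)

emb = {}                       # toy embedding table, filled lazily
print(sigma("wants"))          # ['^wa', 'wan', 'ant', 'nts', 'ts$'], as in Table 3
w_vec = compose_add(sigma("wants"), emb)

The bi-LSTM and CNN composition functions replace compose_add but consume the same unit sequences.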
4.1 Training and Evaluation Our LSTM-LM uses two hidden layers with 200 hidden units and representation vectors for words, characters, and morphs all have dimension 200. All parameters are initialized uniformly at random from -0.1 to 0.1, trained by stochastic gradient descent with mini-batch size of 32, time steps of 20, for 50 epochs. To avoid overfitting, we apply dropout with probability 0.5 on the input-tohidden layer and all of the LSTM cells (including those in the bi-LSTM, if used). For all models which do not use bi-LSTM composition, we start with a learning rate of 1.0 and decrease it by half if the validation perplexity does not decrease by 0.1 after 3 epochs. For models with bi-LSTMs composition, we use a constant learning rate of 0.2 and stop training when validation perplexity does not improve after 3 epochs. For the character CNN model, we use the same settings as the small model of Kim et al. (2016). To make our results comparable to Ling et al. (2015), for each language we limit the output vocabulary to the most frequent 5,000 training words plus an unknown word token. To learn to predict unknown words, we follow Ling et al. (2015): in training, words that occur only once are stochastically replaced with the unknown token with probability 0.5. To evaluate the models, we compute perplexity on the test data. 5 Results and Analysis Table 5 presents our main results. In six of ten languages, character-trigram representations composed with bi-LSTMs achieve the lowest perplexities. As far as we know, this particular model has not been tested before, though it is similar 2020 Language word character char trigrams BPE Morfessor %imp bi-lstm CNN add bi-lstm add bi-lstm add bi-lstm Czech 41.46 34.25 36.60 42.73 33.59 49.96 33.74 47.74 36.87 18.98 English 46.40 43.53 44.67 45.41 42.97 47.51 43.30 49.72 49.72 7.39 Russian 34.93 28.44 29.47 35.15 27.72 40.10 28.52 39.60 31.31 20.64 Finnish 24.21 20.05 20.29 24.89 18.62 26.77 19.08 27.79 22.45 23.09 Japanese 98.14 98.14 91.63 101.99 101.09 126.53 96.80 111.97 99.23 6.63 Turkish 66.97 54.46 55.07 50.07 54.23 59.49 57.32 62.20 62.70 25.24 Arabic 48.20 42.02 43.17 50.85 39.87 50.85 42.79 52.88 45.46 17.28 Hebrew 38.23 31.63 33.19 39.67 30.40 44.15 32.91 44.94 34.28 20.48 Indonesian 46.07 45.47 46.60 58.51 45.96 59.17 43.37 59.33 44.86 5.86 Malay 54.67 53.01 50.56 68.51 50.74 68.99 51.21 68.20 52.50 7.52 Table 5: Language model perplexities on test. The best model for each language is highlighted in bold and the improvement of this model over the word-level model is shown in the final column. to (but more general than) the model of Sperr et al. (2013). We can see that the performance of character, character trigrams, and BPE are very competitive. Composition by bi-LSTMs or CNN is more effective than addition, except for Turkish. We also observe that BPE always outperforms Morfessor, even for the agglutinative languages. We now turn to a more detailed analysis by morphological typology. Fusional languages. For these languages, character trigrams composed with bi-LSTMs outperformed all other models, particularly for Czech and Russian (up to 20%), which is unsurprising since both are morphologically richer than English. Agglutinative languages. We observe different results for each language. For Finnish, character trigrams composed with bi-LSTMs achieves the best perplexity. 
Surprisingly, for Turkish character trigrams composed via addition is best and addition also performs quite well for other representations, potentially useful since the addition function is simpler and faster than bi-LSTMs. We suspect that this is due to the fact that Turkish morphemes are reasonably short, hence wellapproximated by character trigrams. For Japanese, we improvements from character models are more modest than in other languages. Root and Pattern. For these languages, character trigrams composed with bi-LSTMs also achieve the best perplexity. We had wondered whether CNNs would be more effective for root-and-pattern morphology, but since these data are unvocalized, it is more likely that nonconcatenative effects are minimized, though we do still find morphological variants with consonantal inflections that behave more like concatenation. For example, maktab (root:ktb) is written as mktb. We suspect this makes character trigrams quite effective since they match the tri-consonantal root patterns among words which share the same root. Reduplication. For Indonesian, BPE morphs composed with bi-LSTMs model obtain the best perplexity. For Malay, the character CNN outperforms other models. However, these improvements are small compared to other languages. This likely reflects that Indonesian and Malay are only moderately inflected, where inflection involves both concatenative and non-concatenative processes. 5.1 Effects of Morphological Analysis In the experiments above, we used unsupervised morphological segmentation as a proxy for morphological analysis (Table 3). However, as discussed in Section 2, this is quite approximate, so it is natural to wonder what would happen if we had the true morphological analysis. If characterlevel models are powerful enough to capture the effects of morphology, then they should have the predictive accuracy of a model with access to this analysis. To find out, we conducted an oracle experiment using the human-annotated morphological analyses provided in the UD datasets for Czech and Russian, the only languages in our set for which these analyses were available. In these experiments we treat the lemma and each morphological feature as a subword unit. The results (Table 6) show that bi-LSTM composition of these representations outperforms all 2021 Languages Addition bi-LSTM Czech 51.8 30.07 Russian 41.82 26.44 Table 6: Perplexity results using hand-annotated morphological analyses (cf. Table 5). other models for both languages. These results demonstrate that neither character representations nor unsupervised segmentation is a perfect replacement for manual morphological analysis, at least in terms of predictive accuracy. In light of character-level results, they imply that current unsupervised morphological analyzers are poor substitutes for real morphological analysis. However, we can obtain much more unannotated than annotated data, and we might guess that the character-level models would outperform those based on morphological analyses if trained on larger data. To test this, we ran experiments that varied the training data size on three representation models: word, character-trigram bi-LSTM, and character CNN. Since we want to see how much training data is needed to reach perplexity obtained using annotated data, we use the same output vocabulary derived from the original training. While this makes it possible to compare perplexities across models, it is unfavorable to the models trained on larger data, which may focus on other words. 
This is a limitation of our experimental setup, but does allow us to draw some tentative conclusions. As shown in Table 7, a characterlevel model trained on an order of magnitude more data still does not match the predictive accuracy of a model with access to morphological analysis. 5.2 Automatic Morphological Analysis The oracle experiments show promising results if we have annotated data. But these annotations are expensive, so we also investigated the use of automatic morphological analysis. We obtained analyses for Arabic with the MADAMIRA (Pasha et al., 2014).4 As in the experiment using annotations, we treated each morphological feature as a subword unit. The resulting perplexities of 71.94 and 42.85 for addition and bi-LSTMs, respectively, are worse than those obtained with character trigrams (39.87), though it approaches the best perplexities. 4We only experimented with Arabic since MADAMIRA disambiguates words in contexts; most other analyzers we found did not do this, and would require additional work to add disambiguation. #tokens word char trigram char bi-LSTM CNN 1M 39.69 32.34 35.15 2M 37.59 36.44 35.58 3M 36.71 35.60 35.75 4M 35.89 32.68 35.93 5M 35.20 34.80 37.02 10M 35.60 35.82 39.09 Table 7: Perplexity results on the Czech development data, varying training data size. Perplexity using ~1M tokens annotated data is 28.83. 5.3 Targeted Perplexity Results A difficulty in interpreting the results of Table 5 with respect to specific morphological processes is that perplexity is measured for all words. But these processes do not apply to all words, so it may be that the effects of specific morphological processes are washed out. To get a clearer picture, we measured perplexity for only specific subsets of words in our test data: specifically, given target word wi, we measure perplexity of word wi+1. In other words, we analyze the perplexities when the inflected words of interest are in the most recent history, exploiting the recency bias of our LSTM-LM. This is the perplexity most likely to be strongly affected by different representations, since we do not vary representations of the predicted word itself. We look at several cases: nouns and verbs in Czech and Russian, where word classes can be identified from annotations, and reduplication in Indonesian, which we can identify mostly automatically. For each analysis, we also distinguish between frequent cases, where the inflected word occurs more than ten times in the training data, and rare cases, where it occurs fewer than ten times. We compare only bi-LSTM models. For Czech and Russian, we again use the UD annotation to identify words of interest. The results (Table 8), show that manual morphological analysis uniformly outperforms other subword models, with an especially strong effect for Czech nouns, suggesting that other models do not capture useful predictive properties of a morphological analysis. We do however note that character trigrams achieve low perplexities in most cases, similar to overall results (Table 5). We also observe that the subword models are more effective for rare words. 2022 Inflection Model all frequent rare Czech word 61.21 56.84 72.96 nouns characters 51.01 47.94 59.01 char-trigrams 50.34 48.05 56.13 BPE 53.38 49.96 62.81 morph. analysis 40.86 40.08 42.64 Czech word 81.37 74.29 99.40 verbs characters 70.75 68.07 77.11 char-trigrams 65.77 63.71 70.58 BPE 74.18 72.45 78.25 morph. 
analysis 59.48 58.56 61.78 Russian word 45.11 41.88 48.26 nouns characters 37.90 37.52 38.25 char-trigrams 36.32 34.19 38.40 BPE 43.57 43.67 43.47 morph. analysis 31.38 31.30 31.50 Russian word 56.45 47.65 69.46 verbs characters 45.00 40.86 50.60 char-trigrams 42.55 39.05 47.17 BPE 54.58 47.81 64.12 morph. analysis 41.31 39.8 43.18 Table 8: Average perplexities of words that occur after nouns and verbs. Frequent words occur more than ten times in the training data; rare words occur fewer times than this. The best perplexity is in bold while the second best is underlined. For Indonesian, we exploit the fact that the hyphen symbol ‘-’ typically separates the first and second occurrence of a reduplicated morpheme, as in the examples of Section 2. We use the presence of word tokens containing hyphens to estimate the percentage of those exhibiting reduplication. As shown in Table 9, the numbers are quite low. Table 10 shows results for reduplication. In contrast with the overall results, the BPE bi-LSTM model has the worst perplexities, while character bi-LSTM has the best, suggesting that these models are more effective for reduplication. Looking more closely at BPE segmentation of reduplicated words, we found that only 6 of 252 reduplicated words have a correct word segmentation, with the reduplicated morpheme often combining differently with the notional start-of-word or hyphen character. One the other hand BPE correctly learns 8 out of 9 Indonesian prefixes and 4 out of 7 Indonesian suffixes.5 This analysis supports our intuition that the improvement from BPE might come from its modeling of concatenative morphology. 5.4 Qualitative Analysis Table 11 presents nearest neighbors under cosine similarity for in-vocabulary, rare, and out-of5We use Indonesian affixes listed in Larasati et al. (2011) Language type-level (%) token-level (%) Indonesian 1.10 2.60 Malay 1.29 2.89 Table 9: Percentage of full reduplication on the type and token level. Model all frequent rare word 101.71 91.71 156.98 characters 99.21 91.35 137.42 BPE 117.2 108.86 156.81 Table 10: Average perplexities of words that occur after reduplicated words in the test set. vocabulary (OOV) words.6 For frequent words, standard word embeddings are clearly superior for lexical meaning. Character and morph representations tend to find words that are orthographically similar, suggesting that they are better at modeling dependent than root morphemes. The same pattern holds for rare and OOV words. We suspect that the subword models outperform words on language modeling because they exploit affixes to signal word class. We also noticed similar patterns in Japanese. We analyze reduplication by querying reduplicated words to find their nearest neighbors using the BPE bi-LSTM model. If the model were sensitive to reduplication, we would expect to see morphological variants of the query word among its nearest neighbors. However, from Table 12, this is not so. With the partially reduplicated query berlembah-lembah, we do not find the lemma lembah. 6 Conclusion We presented a systematic comparison of word representation models with different levels of morphological awareness, across languages with different morphological typologies. Our results confirm previous findings that character-level models are effective for many languages, but these models do not match the predictive accuracy of model with explicit knowledge of morphology, even after we increase the training data size by ten times. 
Moreover, our qualitative analysis suggests that they learn orthographic similarity of affixes, and lose the meaning of root morphemes. Although morphological analyses are available 6https://radimrehurek.com/gensim/ 2023 Model Frequent Words Rare Words OOV words man including relatively unconditional hydroplane uploading foodism word person like extremely nazi molybdenum anyone featuring making fairly your children include very joints imperial men includes quite supreme intervene BPE ii called newly unintentional emphasize upbeat vigilantism LSTM hill involve never ungenerous heartbeat uprising pyrethrum text like essentially unanimous hybridized handling pausanias netherlands creating least unpalatable unplatable hand-colored footway charmak include resolutely unconstitutional selenocysteine drifted tuaregs trigrams vill includes regeneratively constitutional guerrillas affected quft LSTM cow undermining reproductively unimolecular scrofula conflicted subjectivism maga under commonly medicinal seleucia convicted tune-up charmayr inclusion relates undamaged hydrolyzed musagte formulas LSTM many insularity replicate unmyelinated hydraulics mutualism formally mary includes relativity unconditionally hysterotomy mutualists fecal may include gravestones uncoordinated hydraulic meursault foreland charmtn include legislatively unconventional hydroxyproline unloading fordism CNN mann includes lovely unintentional hydrate loading dadaism jan excluding creatively unconstitutional hydrangea upgrading popism nun included negatively untraditional hyena upholding endemism Table 11: Nearest neighbours of semantically and syntactically similar words. Query Top nearest neighbours kota-kota wilayah-wilayah (areas), pulau-pulau (islands), negara-negara (countries), (cities) bahasa-bahasa (languages), koloni-koloni (colonies) berlembah-lembah berargumentasi (argue), bercakap-cakap (converse), berkemauan (will), (have many valleys) berimplikasi (imply), berketebalan (have a thickness) Table 12: Nearest neighbours of Indonesian reduplicated words in the BPE bi-LSTM model. in limited quantities, our results suggest that there might be utility in semi-supervised learning from partially annotated data. Across languages with different typologies, our experiments show that the subword unit models are most effective on agglutinative languages. However, these results do not generalize to all languages, since factors such as morphology and orthography affect the utility of these representations. We plan to explore these effects in future work. Acknowledgments Clara Vania is supported by the Indonesian Endowment Fund for Education (LPDP), the Centre for Doctoral Training in Data Science, funded by the UK EPSRC (grant EP/L016427/1), and the University of Edinburgh. We thank Sameer Bansal, Toms Bergmanis, Marco Damonte, Federico Fancellu, Sorcha Gilroy, Sharon Goldwater, Frank Keller, Mirella Lapata, Felicia Liu, Jonathan Mallinson, Joana Ribeiro, Naomi Saphra, Ida Szubert, and the anonymous reviewers for helpful discussion of this work and comments on previous drafts of the paper. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. http://tensorflow.org/. Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Sofia, Bulgaria, pages 183–192. http://www.aclweb.org/anthology/W13-3520. Emily M. Bender. 2013. Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax. Morgan & Claypool Publishers. 2024 Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR abs/1607.04606. http://arxiv.org/abs/1607.04606. Jan A. Botha and Phil Blunsom. 2014. Compositional Morphology for Word Representations and Language Modeling. In Proceedings of the 31st International Conference on Machine Learning (ICML). Beijing, China. http://jmlr.org/proceedings/papers/v32/botha14.pdf. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 740–750. http://www.aclweb.org/anthology/D14-1082. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. Ryan Cotterell and Hinrich Sch¨utze. 2015. Morphological word-embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1287–1292. http://www.aclweb.org/anthology/N151140. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 334–343. http://www.aclweb.org/anthology/P15-1033. Philip Gage. 1994. A new algorithm for data compression. C Users J. 12(2):23–38. http://dl.acm.org/citation.cfm?id=177910.177914. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, San Diego, California, pages 1296– 1306. http://www.aclweb.org/anthology/N16-1155. Alex Graves, Santiago Fern´andez, and J¨urgen Schmidhuber. 2005. Bidirectional lstm networks for improved phoneme classification and recognition. In Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications Volume Part II. Springer-Verlag, Berlin, Heidelberg, ICANN’05, pages 799–804. http://dl.acm.org/citation.cfm?id=1986079.1986220. Martin Haspelmath. 2010. Understanding Morphology. Understanding Language Series. Arnold, London, second edition. Georg Heigold, Guenter Neumann, and Josef van Genabith. 2017. An extensive empirical evaluation of character-based morphological tagging for 14 languages. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, pages 505– 513. http://aclweb.org/anthology/E17-1048. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Katharina Kann and Hinrich Sch¨utze. 2016. Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Association for Computational Linguistics, chapter MED: The LMU System for the SIGMORPHON 2016 Shared Task on Morphological Reinflection, pages 62–70. https://doi.org/10.18653/v1/W16-2010. Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. 2016. Character-aware neural language models. In Proceedings of the 2016 Conference on Artificial Intelligence (AAAI). Septina Dian Larasati, Vladislav Kuboˇn, and Daniel Zeman. 2011. Indonesian Morphology Tool (MorphInd): Towards an Indonesian Corpus, Springer Berlin Heidelberg, Berlin, Heidelberg, pages 119– 129. https://doi.org/10.1007/978-3-642-23138-4 8. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without explicit segmentation. CoRR abs/1610.03017. http://arxiv.org/abs/1610.03017. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1520– 1530. http://aclweb.org/anthology/D15-1176. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for 2025 Computational Linguistics, Sofia, Bulgaria, pages 104–113. http://www.aclweb.org/anthology/W133512. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Linguist. 19(2):313–330. http://dl.acm.org/citation.cfm?id=972470.972475. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010). International Speech Communication Association, volume 2010, pages 1045–1048. http://www.iscaspeech.org/archive/interspeech 2010/i10 1045.html. Yasumasa Miyamoto and Kyunghyun Cho. 
2016. Gated word-character recurrent language model. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1992–1997. https://aclweb.org/anthology/D16-1209. G. David Morley. 2000. Syntax in Functional Grammar: An Introduction to Lexicogrammar in Systemic Linguistics. Continuum. Joakim Nivre, ˇZeljko Agi´c, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bowman, Giuseppe G. A. Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Tomaˇz Erjavec, Rich´ard Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gonzales, Bruno Guillaume, Jan Hajiˇc, Dag Haug, Radu Ion, Elena Irimia, Anders Johannsen, Hiroshi Kanayama, Jenna Kanerva, Simon Krek, Veronika Laippala, Alessandro Lenci, Nikola Ljubeˇsi´c, Teresa Lynn, Christopher Manning, Ctlina Mrnduc, David Mareˇcek, H´ector Mart´ınez Alonso, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Anna Missil¨a, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, Kiril Simov, Aaron Smith, Jan ˇStˇep´anek, Alane Suhr, Zsolt Sz´ant´o, Takaaki Tanaka, Reut Tsarfaty, Sumire Uematsu, Larraitz Uria, Viktor Varga, Veronika Vincze, Zdenˇek ˇZabokrtsk´y, Daniel Zeman, and Hanzhi Zhu. 2015. Universal dependencies 1.2 LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague. http://hdl.handle.net/11234/11548. Arfath Pasha, Mohamed Al-Badrashiny, Mona Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14). European Language Resources Association (ELRA), Reykjavik, Iceland, pages 1094–1101. ACL Anthology Identifier: L141479. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 412–418. http://anthology.aclweb.org/P16-2067. Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and TieYan Liu. 2014. Co-learning of word representations and morpheme representations. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pages 141– 150. http://www.aclweb.org/anthology/C14-1015. Marek Rei, Gamal Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence labeling models. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 309– 318. http://aclweb.org/anthology/C16-1030. Brian Roark and Richard Sproat. 2007. Computational Approach to Morphology and Syntax. Oxford University Press. Cicero Dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for partof-speech tagging. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning. PMLR, Bejing, China, volume 32 of Proceedings of Machine Learning Research, pages 1818–1826. http://proceedings.mlr.press/v32/santos14.html. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1715–1725. http://www.aclweb.org/anthology/P161162. 2026 Peter Smit, Sami Virpioja, Stig-Arne Gr¨onroos, and Mikko Kurimo. 2014. Morfessor 2.0: Toolkit for statistical morphological segmentation. In Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 21– 24. http://www.aclweb.org/anthology/E14-2006. Henning Sperr, Jan Niehues, and Alex Waibel. 2013. Letter n-gram-based input encoding for continuous space language models. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality. Association for Computational Linguistics, Sofia, Bulgaria, pages 30–39. http://www.aclweb.org/anthology/W13-3204. Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2016. Word representation models for morphologically rich languages in neural machine translation. CoRR abs/1606.04217. http://arxiv.org/abs/1606.04217. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1504–1515. https://aclweb.org/anthology/D16-1157. 2027
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2028–2036 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1185 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2028–2036 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1185 Riemannian Optimization for Skip-Gram Negative Sampling Alexander Fonarev1,2,4,*, Oleksii Hrinchuk1,2,3,*, Gleb Gusev2,3, Pavel Serdyukov2, and Ivan Oseledets1,5 1Skolkovo Institute of Science and Technology, Moscow, Russia 2Yandex LLC, Moscow, Russia 3Moscow Institute of Physics and Technology, Moscow, Russia 4SBDA Group, Dublin, Ireland 5Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia Abstract Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. However, the optimization of SGNS objective can be viewed as a problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix. 1 Introduction In this paper, we consider the problem of embedding words into a low-dimensional space in order to measure the semantic similarity between them. As an example, how to find whether the word “table” is semantically more similar to the word “stool” than to the word “sky”? That is achieved by constructing a low-dimensional vector representation for each word and measuring similarity between the words as the similarity between the corresponding vectors. One of the most popular word embedding models (Mikolov et al., 2013) is a discriminative neural network that optimizes Skip-Gram Negative Sampling (SGNS) objective (see Equation 3). It aims at predicting whether two words can be found close to each other within a text. As shown in Section 2, the process of word embeddings training ∗The first two authors contributed equally to this work using SGNS can be divided into two general steps with clear objectives: Step 1. Search for a low-rank matrix X that provides a good SGNS objective value; Step 2. Search for a good low-rank representation X = WC⊤in terms of linguistic metrics, where W is a matrix of word embeddings and C is a matrix of so-called context embeddings. Unfortunately, most previous approaches mixed these two steps into a single one, what entails a not completely correct formulation of the optimization problem. For example, popular approaches to train embeddings (including the original “word2vec” implementation) do not take into account that the objective from Step 1 depends only on the product X = WC⊤: instead of straightforward computing of the derivative w.r.t. X, these methods are explicitly based on the derivatives w.r.t. W and C, what complicates the optimization procedure. Moreover, such approaches do not take into account that parametrization WC⊤of matrix X is non-unique and Step 2 is required. 
Indeed, for any invertible matrix S, we have X = W1C⊤ 1 = W1SS−1C⊤ 1 = W2C⊤ 2 , therefore, solutions W1C⊤ 1 and W2C⊤ 2 are equally good in terms of the SGNS objective but entail different cosine similarities between embeddings and, as a result, different performance in terms of linguistic metrics (see Section 4.2 for details). A successful attempt to follow the above described steps, which outperforms the original SGNS optimization approach in terms of various linguistic tasks, was proposed in (Levy and Goldberg, 2014). In order to obtain a low-rank matrix X on Step 1, the method reduces the dimensionality of Shifted Positive Pointwise Mutual Informa2028 tion (SPPMI) matrix via Singular Value Decomposition (SVD). On Step 2, it computes embeddings W and C via a simple formula that depends on the factors obtained by SVD. However, this method has one important limitation: SVD provides a solution to a surrogate optimization problem, which has no direct relation to the SGNS objective. In fact, SVD minimizes the Mean Squared Error (MSE) between X and SPPMI matrix, what does not lead to minimization of SGNS objective in general (see Section 6.1 and Section 4.2 in (Levy and Goldberg, 2014) for details). These issues bring us to the main idea of our paper: while keeping the low-rank matrix search setup on Step 1, optimize the original SGNS objective directly. This leads to an optimization problem over matrix X with the lowrank constraint, which is often (Mishra et al., 2014) solved by applying Riemannian optimization framework (Udriste, 1994). In our paper, we use the projector-splitting algorithm (Lubich and Oseledets, 2014), which is easy to implement and has low computational complexity. Of course, Step 2 may be improved as well, but we regard this as a direction of future work. As a result, our approach achieves the significant improvement in terms of SGNS optimization on Step 1 and, moreover, the improvement on Step 1 entails the improvement on Step 2 in terms of linguistic metrics. That is why, the proposed two-step decomposition of the problem makes sense, what, most importantly, opens the way to applying even more advanced approaches based on it (e.g., more advanced Riemannian optimization techniques for Step 1 or a more sophisticated treatment of Step 2). To summarize, the main contributions of our paper are: • We reformulated the problem of SGNS word embedding learning as a two-step procedure with clear objectives; • For Step 1, we developed an algorithm based on Riemannian optimization framework that optimizes SGNS objective over low-rank matrix X directly; • Our algorithm outperforms state-of-the-art competitors in terms of SGNS objective and the semantic similarity linguistic metric (Levy and Goldberg, 2014; Mikolov et al., 2013; Schnabel et al., 2015). 2 Problem Setting 2.1 Skip-Gram Negative Sampling In this paper, we consider the Skip-Gram Negative Sampling (SGNS) word embedding model (Mikolov et al., 2013), which is a probabilistic discriminative model. Assume we have a text corpus given as a sequence of words w1, . . . , wn, where n may be larger than 1012 and wi ∈VW belongs to a vocabulary of words VW . A context c ∈VC of the word wi is a word from set {wi−L, ..., wi−1, wi+1, ..., wi+L} for some fixed window size L. Let w, c ∈Rd be the word embeddings of word w and context c, respectively. Assume they are specified by the following mappings: W : VW →Rd, C : VC →Rd. The ultimate goal of SGNS word embedding training is to fit good mappings W and C. 
Let D be a multiset of all word-context pairs observed in the corpus. In the SGNS model, the probability that word-context pair (w, c) is observed in the corpus is modeled as a following dsitribution: P (#(w, c) ̸= 0|w, c) = = σ(⟨w, c⟩) = 1 1 + exp(−⟨w, c⟩), (1) where #(w, c) is the number of times the pair (w, c) appears in D and ⟨x, y⟩is the scalar product of vectors x and y. Number d is a hyperparameter that adjusts the flexibility of the model. It usually takes values from tens to hundreds. In order to collect a training set, we take all pairs (w, c) from D as positive examples and k randomly generated pairs (w, c) as negative ones. The number of times the word w and the context c appear in D can be computed as #(w) = X c∈Vc #(w, c), #(c) = X w∈Vw #(w, c) accordingly. Then negative examples are generated from the distribution defined by #(c) counters: PD(c) = #(c) |D| . 2029 In this way, we have a model maximizing the following logarithmic likelihood objective for all word-context pairs (w, c): lwc = #(w, c)(log σ(⟨w, c⟩)+ +k · Ec′∼PD log σ(−⟨w, c′⟩)). (2) In order to maximize the objective over all observations for each pair (w, c), we arrive at the following SGNS optimization problem over all possible mappings W and C: l = X w∈VW X c∈VC (#(w, c)(log σ(⟨w, c⟩)+ +k · Ec′∼PD log σ(−⟨w, c′⟩))) →max W,C . (3) Usually, this optimization is done via the stochastic gradient descent procedure that is performed during passing through the corpus (Mikolov et al., 2013; Rong, 2014). 2.2 Optimization over Low-Rank Matrices Relying on the prospect proposed in (Levy and Goldberg, 2014), let us show that the optimization problem given by (3) can be considered as a problem of searching for a matrix that maximizes a certain objective function and has the rank-d constraint (Step 1 in the scheme described in Section 1). 2.2.1 SGNS Loss Function As shown in (Levy and Goldberg, 2014), the logarithmic likelihood (3) can be represented as the sum of lw,c(w, c) over all pairs (w, c), where lw,c(w, c) has the following form: lw,c(w, c) = #(w, c) log σ(⟨w, c⟩)+ +k#(w)#(c) |D| log σ(−⟨w, c⟩). (4) A crucial observation is that this loss function depends only on the scalar product ⟨w, c⟩but not on embeddings w and c separately: lw,c(w, c) = fw,c(xw,c), where fw,c(xw,c) = aw,c log σ(xw,c)+bw,c log σ(−xw,c), and xw,c is the scalar product ⟨w, c⟩, and aw,c = #(w, c), bw,c = k#(w)#(c) |D| are constants. 2.2.2 Matrix Notation Denote |VW | as n and |VC| as m. Let W ∈Rn×d and C ∈Rm×d be matrices, where each row w ∈ Rd of matrix W is the word embedding of the corresponding word w and each row c ∈Rd of matrix C is the context embedding of the corresponding context c. Then the elements of the product of these matrices X = WC⊤ are the scalar products xw,c of all pairs (w, c): X = (xw,c), w ∈VW , c ∈VC. Note that this matrix has rank d, because X equals to the product of two matrices with sizes (n × d) and (d × m). Now we can write SGNS objective given by (3) as a function of X: F(X) = X w∈VW X c∈VC fw,c(xw,c), F : Rn×m →R. (5) This arrives us at the following proposition: Proposition 1 SGNS optimization problem given by (3) can be rewritten in the following constrained form: maximize X∈Rn×m F(X), subject to X ∈Md, (6) where Md is the manifold (Udriste, 1994) of all matrices in Rn×m with rank d: Md = {X ∈Rn×m : rank(X) = d}. The key idea of this paper is to solve the optimization problem given by (6) via the framework of Riemannian optimization, which we introduce in Section 3. 
It is important to note that this perspective does not involve optimizing over the parameters $W$ and $C$ directly. It entails optimization in a space with $(n + m - d) \cdot d$ degrees of freedom (Mukherjee et al., 2015) instead of $(n + m) \cdot d$, which simplifies the optimization process (see Section 5 for the experimental results).

2.3 Computing Embeddings from a Low-Rank Solution

Once $X$ is found, we need to recover $W$ and $C$ such that $X = WC^\top$ (Step 2 in the scheme described in Section 1). This problem does not have a unique solution, since if $(W, C)$ satisfy this equation, then $WS^{-1}$ and $CS^\top$ satisfy it as well for any non-singular matrix $S$. Moreover, different solutions may achieve different values of the linguistic metrics (see Section 4.2 for details). While our paper focuses on Step 1, for Step 2 we use a heuristic approach that was proposed in (Levy et al., 2015) and shows good results in practice. We compute the SVD of $X$ in the form $X = U \Sigma V^\top$, where $U$ and $V$ have orthonormal columns and $\Sigma$ is a diagonal matrix, and use

$$W = U\sqrt{\Sigma}, \qquad C = V\sqrt{\Sigma}$$

as the matrices of embeddings.

A simple justification of this solution is the following: we need to map words into vectors in such a way that similar words have similar embeddings in terms of cosine similarity:

$$\cos(\mathbf{w}_1, \mathbf{w}_2) = \frac{\langle \mathbf{w}_1, \mathbf{w}_2 \rangle}{\|\mathbf{w}_1\| \cdot \|\mathbf{w}_2\|}.$$

It is reasonable to assume that two words are similar if they share contexts. Therefore, we can estimate the similarity of two words $w_1, w_2$ as

$$s(w_1, w_2) = \sum_{c \in V_C} x_{w_1, c} \cdot x_{w_2, c},$$

which is the element of the matrix $XX^\top$ with indices $(w_1, w_2)$. Note that $XX^\top = U\Sigma V^\top V \Sigma U^\top = U\Sigma^2 U^\top$. If we choose $W = U\Sigma$, we obtain exactly $\langle \mathbf{w}_1, \mathbf{w}_2 \rangle = s(w_1, w_2)$, since $WW^\top = XX^\top$ in this case. That is, the cosine similarity of the embeddings $\mathbf{w}_1, \mathbf{w}_2$ coincides with the intuitive similarity $s(w_1, w_2)$. However, scaling by $\sqrt{\Sigma}$ instead of $\Sigma$ was shown in (Levy et al., 2015) to be a better solution in experiments.
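A minimal numpy sketch of the Step 2 heuristic just described (an illustration, not the authors' implementation): recover W and C from a low-rank X via SVD with square-root scaling of the singular values.

```python
# Recover embedding matrices from a low-rank solution X: keep the d leading SVD
# factors and scale by the square root of the singular values (Levy et al., 2015).
import numpy as np

def embeddings_from_x(X, d):
    """Return W (n x d) and C (m x d) such that W C^T approximates X."""
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    U, sigma, Vt = U[:, :d], sigma[:d], Vt[:d, :]
    sqrt_sigma = np.sqrt(sigma)
    W = U * sqrt_sigma        # broadcasting scales each column by sqrt(sigma_j)
    C = Vt.T * sqrt_sigma
    return W, C

# Example: a matrix of exact rank 8 is recovered exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 80))
W, C = embeddings_from_x(X, d=8)
print(np.allclose(W @ C.T, X))   # True
```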
3 Proposed Method

3.1 Riemannian Optimization

3.1.1 General Scheme

The main idea of Riemannian optimization (Udriste, 1994) is to treat (6) as a constrained optimization problem. Assume we have an approximate solution $X_i$ at the current step of the optimization process, where $i$ is the step number. In order to improve $X_i$, the next step of standard gradient ascent outputs the point $X_i + \nabla F(X_i)$, where $\nabla F(X_i)$ is the gradient of the objective $F$ at the point $X_i$. Note that the gradient $\nabla F(X_i)$ can be naturally considered as a matrix in $\mathbb{R}^{n \times m}$. The point $X_i + \nabla F(X_i)$ leaves the manifold $\mathcal{M}_d$, because its rank is generally greater than $d$. That is why Riemannian optimization methods map the point $X_i + \nabla F(X_i)$ back to the manifold $\mathcal{M}_d$. The standard Riemannian gradient method first projects the gradient step onto the tangent space at the current point $X_i$ and then retracts it back to the manifold:

$$X_{i+1} = R\big(P_{T\mathcal{M}}(X_i + \nabla F(X_i))\big),$$

where $R$ is the retraction operator and $P_{T\mathcal{M}}$ is the projection onto the tangent space. Although the optimization problem is non-convex, Riemannian optimization methods show good performance on it. Theoretical properties and convergence guarantees of such methods are discussed more thoroughly in (Wei et al., 2016).

3.1.2 Projector-Splitting Algorithm

In our paper, we use a simplified version of this approach that retracts the point $X_i + \nabla F(X_i)$ directly to the manifold and does not require the projection onto the tangent space $P_{T\mathcal{M}}$, as illustrated in Figure 1:

$$X_{i+1} = R(X_i + \nabla F(X_i)).$$

Intuitively, the retraction $R$ finds a rank-$d$ matrix on the manifold $\mathcal{M}_d$ that is similar to the high-rank matrix $X_i + \nabla F(X_i)$ in terms of the Frobenius norm. How can we do it? The most straightforward way to reduce the rank of $X_i + \nabla F(X_i)$ is to perform an SVD that keeps its $d$ largest singular values:

1: $U_{i+1}, S_{i+1}, V_{i+1}^\top \leftarrow \text{SVD}(X_i + \nabla F(X_i))$,
2: $X_{i+1} \leftarrow U_{i+1} S_{i+1} V_{i+1}^\top$.      (7)

However, this is computationally expensive. Instead, we use the projector-splitting method (Lubich and Oseledets, 2014), which is a second-order retraction onto the manifold (for details, see the review by Absil and Oseledets (2015)).

[Figure 1: Geometric interpretation of one step of the projector-splitting optimization procedure: the gradient step and the retraction of the high-rank matrix $X_i + \nabla F(X_i)$ to the manifold of low-rank matrices $\mathcal{M}_d$.]

Its practical implementation is also quite intuitive: instead of computing the full SVD of $X_i + \nabla F(X_i)$, as in the gradient projection method, we use just one step of the block power numerical method (Bentbib and Kanber, 2015) that computes the SVD, which reduces the computational complexity.

Let us keep the current point in the following factorized form:

$$X_i = U_i S_i V_i^\top, \qquad (8)$$

where the matrices $U_i \in \mathbb{R}^{n \times d}$ and $V_i \in \mathbb{R}^{m \times d}$ have $d$ orthonormal columns and $S_i \in \mathbb{R}^{d \times d}$. Then we need to perform two QR-decompositions to retract the point $X_i + \nabla F(X_i)$ back to the manifold:

1: $U_{i+1}, S_{i+1} \leftarrow \text{QR}\big((X_i + \nabla F(X_i)) V_i\big)$,
2: $V_{i+1}, S_{i+1}^\top \leftarrow \text{QR}\big((X_i + \nabla F(X_i))^\top U_{i+1}\big)$,
3: $X_{i+1} \leftarrow U_{i+1} S_{i+1} V_{i+1}^\top$.

In this way, we always keep the solution $X_{i+1} = U_{i+1} S_{i+1} V_{i+1}^\top$ on the manifold $\mathcal{M}_d$ and in the form (8).

Importantly, we only need to compute $\nabla F(X_i)$; the gradients with respect to $U$, $S$ and $V$ are never computed explicitly, thus avoiding the subtle case where $S$ is close to singular (a so-called singular, or critical, point on the manifold). Indeed, the gradient with respect to $U$ (while keeping the orthogonality constraints) can be written (Koch and Lubich, 2007) as

$$\frac{\partial F}{\partial U} = \frac{\partial F}{\partial X} V S^{-1},$$

which means that the gradient will be large if $S$ is close to singular. The projector-splitting scheme is free from this problem.

3.2 Algorithm

For the SGNS objective given by (5), an element of the gradient $\nabla F$ has the form:

$$(\nabla F(X))_{w,c} = \frac{\partial f_{w,c}(x_{w,c})}{\partial x_{w,c}} = \#(w, c) \cdot \sigma(-x_{w,c}) - \frac{k \#(w) \#(c)}{|D|} \cdot \sigma(x_{w,c}).$$

To make the method more flexible in terms of convergence properties, we additionally use a step size parameter $\lambda \in \mathbb{R}$. In this case, the retraction $R$ maps $X_i + \lambda \nabla F(X_i)$ instead of $X_i + \nabla F(X_i)$ onto the manifold. The whole optimization procedure is summarized in Algorithm 1.
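The retraction at the core of Section 3.1.2 can be sketched in a few lines of numpy (an illustration under the factorized form (8); not the released implementation). Algorithm 1 below simply wraps this step in a loop over iterations.

```python
# One projector-splitting step: retract X_i + lambda * grad back to the rank-d
# manifold with two QR decompositions, keeping the factorized form X_i = U_i S_i V_i^T.
import numpy as np

def projector_splitting_step(U, S, V, grad, lam=1.0):
    """U (n x d) and V (m x d) have orthonormal columns; S is (d x d)."""
    target = U @ S @ V.T + lam * grad                 # generally full-rank
    U_new, _ = np.linalg.qr(target @ V)               # step 1 (S recomputed in step 2)
    V_new, S_new_T = np.linalg.qr(target.T @ U_new)   # step 2
    return U_new, S_new_T.T, V_new                    # X_{i+1} = U_new S_new V_new^T

# Example: a perturbed rank-2 point retracts to an exactly rank-2 matrix.
rng = np.random.default_rng(0)
n, m, d = 30, 20, 2
U0, _ = np.linalg.qr(rng.normal(size=(n, d)))
V0, _ = np.linalg.qr(rng.normal(size=(m, d)))
S0 = np.diag([3.0, 1.0])
grad = rng.normal(size=(n, m))
U1, S1, V1 = projector_splitting_step(U0, S0, V0, grad, lam=0.1)
print(np.linalg.matrix_rank(U1 @ S1 @ V1.T))          # 2
```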
4 Experimental Setup

4.1 Training Models

We compare the performance of our method (“RO-SGNS” in the tables) to two baselines: SGNS embeddings optimized via stochastic gradient descent, as implemented in the original “word2vec” (“SGD-SGNS” in the tables) (Mikolov et al., 2013), and embeddings obtained by SVD over the SPPMI matrix (“SVD-SPPMI” in the tables) (Levy and Goldberg, 2014). We also experimented with blockwise alternating optimization over the factors W and C, but the results are almost the same as the SGD results, which is why we do not include them in the paper. The source code of our experiments is available online.1 The models were trained on the English Wikipedia “enwik9” corpus,2 which was previously used in most papers on this topic. As in previous studies, we kept only the words that occur more than 200 times in the training corpus (Levy and Goldberg, 2014; Mikolov et al., 2013). As a result, we obtained a vocabulary of 24,292 unique tokens (the set of words $V_W$ and the set of contexts $V_C$ are equal). The size of the context window was set to 5 for all experiments, as was done in (Levy and Goldberg, 2014; Mikolov et al., 2013). We conduct three series of experiments: for dimensionality d = 100, d = 200, and d = 500.

1 https://github.com/AlexGrinch/ro_sgns
2 http://mattmahoney.net/dc/textdata

Algorithm 1 Riemannian Optimization for SGNS
Require: Dimensionality $d$, initialization $W_0$ and $C_0$, step size $\lambda$, gradient function $\nabla F: \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times m}$, number of iterations $K$
Ensure: Factor $W \in \mathbb{R}^{n \times d}$
1: $X_0 \leftarrow W_0 C_0^\top$  # get an initial point on the manifold
2: $U_0, S_0, V_0^\top \leftarrow \text{SVD}(X_0)$  # compute the first point satisfying the low-rank constraint
3: for $i \leftarrow 1, \dots, K$ do
4:   $U_i, S_i \leftarrow \text{QR}\big((X_{i-1} + \lambda \nabla F(X_{i-1})) V_{i-1}\big)$  # perform one step of the block power method
5:   $V_i, S_i^\top \leftarrow \text{QR}\big((X_{i-1} + \lambda \nabla F(X_{i-1}))^\top U_i\big)$
6:   $X_i \leftarrow U_i S_i V_i^\top$  # update the point on the manifold
7: end for
8: $U, \Sigma, V^\top \leftarrow \text{SVD}(X_K)$
9: $W \leftarrow U \sqrt{\Sigma}$  # compute word embeddings
10: return $W$

The optimization step size is chosen to be small enough to avoid huge gradient values. However, a thorough tuning of $\lambda$ does not result in a significant difference in performance (this parameter was tuned on the training data only; the exact values used in the experiments are reported below).

4.2 Evaluation

We evaluate word embeddings via the word similarity task. We use the following popular datasets for this purpose: “wordsim-353” ((Finkelstein et al., 2001); 3 datasets), “simlex-999” (Hill et al., 2016) and “men” (Bruni et al., 2014). The original “wordsim-353” dataset is a mixture of word pairs for both the word similarity and word relatedness tasks. This dataset was split (Agirre et al., 2009) into two intersecting parts, “wordsim-sim” (“ws-sim” in the tables) and “wordsim-rel” (“ws-rel” in the tables), to separate the words from the two tasks. In our experiments, we use both of them alongside the full version of “wordsim-353” (“ws-full” in the tables). Each dataset contains word pairs together with assessor-assigned similarity scores for each pair. As a quality measure, we use Spearman’s correlation between these human ratings and the cosine similarities for each pair. We call this quality metric linguistic in our paper.
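The linguistic metric of Section 4.2 can be sketched as follows (hypothetical word vectors and ratings; not the authors' evaluation script): Spearman's correlation between human similarity scores and cosine similarities of the learned embeddings.

```python
# Word similarity evaluation: Spearman's correlation between human ratings and
# cosine similarities, skipping out-of-vocabulary pairs.
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(word_vectors, pairs):
    """word_vectors: dict word -> np.ndarray; pairs: list of (w1, w2, human_score)."""
    model_scores, human_scores = [], []
    for w1, w2, human in pairs:
        if w1 not in word_vectors or w2 not in word_vectors:
            continue
        v1, v2 = word_vectors[w1], word_vectors[w2]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        model_scores.append(cos)
        human_scores.append(human)
    return spearmanr(model_scores, human_scores).correlation

# Toy example with made-up ratings:
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ["tiger", "cat", "car", "train"]}
pairs = [("tiger", "cat", 7.35), ("car", "train", 6.31), ("tiger", "car", 4.5)]
print(evaluate_similarity(vecs, pairs))
```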
5 Results of Experiments

First of all, we compare the values of the SGNS objective obtained by the methods. The comparison is shown in Table 1.

              d = 100   d = 200   d = 500
SGD-SGNS      −1.68     −1.67     −1.63
SVD-SPPMI     −1.65     −1.65     −1.62
RO-SGNS       −1.44     −1.43     −1.41

Table 1: Comparison of SGNS objective values (multiplied by $10^{-9}$) obtained by the models. Larger is better.

We see that the SGD-SGNS and SVD-SPPMI methods provide quite similar results; however, the proposed method obtains significantly better SGNS values, which demonstrates the feasibility of using the Riemannian optimization framework for the SGNS optimization problem. It is interesting to note that the SVD-SPPMI method, which does not optimize the SGNS objective directly, obtains better results than the SGD-SGNS method, which aims at optimizing SGNS. This fact additionally confirms the idea described in Section 2.2.2 that independent optimization over the parameters W and C may decrease performance.

However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in these terms.

Dim. d    Algorithm    ws-sim   ws-rel   ws-full   simlex   men
d = 100   SGD-SGNS     0.719    0.570    0.662     0.288    0.645
          SVD-SPPMI    0.722    0.585    0.669     0.317    0.686
          RO-SGNS      0.729    0.597    0.677     0.322    0.683
d = 200   SGD-SGNS     0.733    0.584    0.677     0.317    0.664
          SVD-SPPMI    0.747    0.625    0.694     0.347    0.710
          RO-SGNS      0.757    0.647    0.708     0.353    0.701
d = 500   SGD-SGNS     0.738    0.600    0.688     0.350    0.712
          SVD-SPPMI    0.765    0.639    0.707     0.380    0.737
          RO-SGNS      0.767    0.654    0.715     0.383    0.732

Table 2: Comparison of the methods in terms of the semantic similarity task. Each entry is the Spearman’s correlation between the predicted similarities and the manually assessed ones.

We see that our method outperforms the competitors on all datasets except “men”, where it obtains slightly worse results. Moreover, it is important that a higher dimension entails a higher performance gain of our method over the competitors.

To understand how our model improves or degrades performance in comparison to the baseline, we found several words whose neighbors in terms of cosine distance change significantly. Table 3 shows the neighbors of the words “five”, “he” and “main” for both the SVD-SPPMI and RO-SGNS models. A neighbor is marked bold if we believe it has a semantic meaning similar to the source word.

               five                               he                                  main
     SVD-SPPMI         RO-SGNS          SVD-SPPMI          RO-SGNS          SVD-SPPMI           RO-SGNS
  Neighbor  Dist.   Neighbor  Dist.   Neighbor    Dist.  Neighbor  Dist.   Neighbor     Dist.  Neighbor   Dist.
  lb        0.748   four      0.999   she         0.918  when      0.904   major        0.631  major      0.689
  kg        0.731   three     0.999   was         0.797  had       0.903   busiest      0.621  important  0.661
  mm        0.670   six       0.997   promptly    0.742  was       0.901   principal    0.607  line       0.631
  mk        0.651   seven     0.997   having      0.731  who       0.892   nearest      0.607  external   0.624
  lbf       0.650   eight     0.996   dumbledore  0.731  she       0.884   connecting   0.591  principal  0.618
  per       0.644   and       0.985   him         0.730  by        0.880   linking      0.588  primary    0.612

Table 3: Examples of the semantic neighbors obtained for the words “five”, “he” and “main”.

First of all, we notice that our model produces much better neighbors for words describing digits or numbers (see the word “five” as an example). A similar situation occurs for many other words; e.g., for “main” the nearest neighbors contain 4 similar words for our model instead of 2 for SVD-SPPMI. The neighborhood of “he” contains fewer semantically similar words for our model; however, it filters out irrelevant words such as “promptly” and “dumbledore”. Table 4 contains the nearest words to the word “usa” from 11th to 20th. We marked the names of US states bold and do not show the top-10 nearest words, as they are exactly names of states for all three models.

                         usa
  SGD-SGNS               SVD-SPPMI             RO-SGNS
  Neighbor        Dist.  Neighbor     Dist.    Neighbor    Dist.
  akron           0.536  wisconsin    0.700    georgia     0.707
  midwest         0.535  delaware     0.693    delaware    0.706
  burbank         0.534  ohio         0.691    maryland    0.705
  nevada          0.534  northeast    0.690    illinois    0.704
  arizona         0.533  cities       0.688    madison     0.703
  uk              0.532  southwest    0.684    arkansas    0.699
  youngstown      0.532  places       0.684    dakota      0.690
  utah            0.530  counties     0.681    tennessee   0.689
  milwaukee       0.530  maryland     0.680    northeast   0.687
  headquartered   0.527  dakota       0.674    nebraska    0.686

Table 4: Examples of the semantic neighbors from 11th to 20th obtained for the word “usa” by all three methods. The top-10 neighbors for all three methods are exact names of states.

Some non-bold words are arguably relevant, as they denote large US cities (“akron”, “burbank”, “madison”) or geographical regions spanning several states (“midwest”, “northeast”, “southwest”), but there are also some completely irrelevant words (“uk”, “cities”, “places”) returned by the first two models.
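Neighbor lists such as those in Tables 3 and 4 can be produced with a simple cosine-similarity ranking; the helper below is a hypothetical illustration with random vectors, not code from the paper.

```python
# Rank the vocabulary by cosine similarity to a query word's embedding.
import numpy as np

def nearest_neighbors(W, vocab, query, topn=6):
    """W: (n x d) embedding matrix; vocab: list of n words; query: a word in vocab."""
    W_norm = W / np.linalg.norm(W, axis=1, keepdims=True)
    sims = W_norm @ W_norm[vocab.index(query)]
    order = np.argsort(-sims)
    return [(vocab[i], float(sims[i])) for i in order if vocab[i] != query][:topn]

rng = np.random.default_rng(0)
vocab = ["five", "four", "three", "took", "money", "week"]
W = rng.normal(size=(len(vocab), 20))
print(nearest_neighbors(W, vocab, "five", topn=3))
```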
Our experiments show that the optimal number of iterations $K$ in the optimization procedure and the step size $\lambda$ depend on the particular value of $d$. For $d = 100$, we have $K = 7$, $\lambda = 5 \cdot 10^{-5}$; for $d = 200$, we have $K = 8$, $\lambda = 5 \cdot 10^{-5}$; and for $d = 500$, we have $K = 2$, $\lambda = 10^{-4}$. Moreover, the best results were obtained when the SVD-SPPMI embeddings were used as the initialization of the Riemannian optimization process. Figure 2 illustrates how the correlation between semantic similarity and human assessment scores changes through the iterations of our method. The optimal value of $K$ is the same for the whole testing set and for its 10-fold subsets chosen for cross-validation. The idea of stopping the optimization procedure at some iteration is also discussed in (Lai et al., 2015).

Training models of the same dimensionality ($d = 500$) on the English Wikipedia corpus using SGD-SGNS, SVD-SPPMI and RO-SGNS took 20 minutes, 10 minutes and 70 minutes, respectively. Our method is slower, but not dramatically so. Moreover, since we did not focus on code efficiency, this time can be reduced.

[Figure 2: Illustration of why it is important to choose the optimal iteration and stop the optimization procedure after it. The graphs show the semantic similarity metric (on “wordsim-353”, “simlex-999” and “men”) as a function of the iteration of the optimization procedure. The embeddings obtained by the SVD-SPPMI method were used as initialization. Parameters: $d = 200$, $\lambda = 5 \cdot 10^{-5}$.]

6 Related Work

6.1 Word Embeddings

Skip-Gram Negative Sampling was introduced in (Mikolov et al., 2013). The “negative sampling” approach is thoroughly described in (Goldberg and Levy, 2014), and the learning method is explained in (Rong, 2014). There are several open-source implementations of the SGNS neural network, which is widely known as “word2vec”.1,2

As shown in Section 2.2, Skip-Gram Negative Sampling optimization can be reformulated as a problem of searching for a low-rank matrix. In order to be able to use out-of-the-box SVD for this task, the authors of (Levy and Goldberg, 2014) used a surrogate version of SGNS as the objective function. There are two general assumptions made in their algorithm that distinguish it from SGNS optimization:

1. SVD optimizes the Mean Squared Error (MSE) objective instead of the SGNS loss function.
2. In order to avoid infinite elements in the SPMI matrix, it is transformed in an ad-hoc manner (into the SPPMI matrix) before applying SVD.

This makes the objective not interpretable in terms of the original task (3). As mentioned in (Levy and Goldberg, 2014), the SGNS objective weighs different (w, c) pairs differently, unlike SVD, which uses the same weight for all pairs, and this may hurt performance. A comprehensive explanation of the relation between the SGNS and SVD-SPPMI methods is provided in (Keerthi et al., 2015). Lai et al. (2015) and Levy et al. (2015) give a good overview of highly practical methods to improve these word embedding models.

1 Original Google word2vec: https://code.google.com/archive/p/word2vec/
2 Gensim word2vec: https://radimrehurek.com/gensim/models/word2vec.html
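For reference, here is a minimal numpy sketch (not code from either this paper or Levy and Goldberg (2014)) of the SPPMI matrix used by the SVD-SPPMI baseline discussed in Section 6.1: shifted positive PMI with shift log k, where negative entries and the infinities arising from zero co-occurrence counts are clipped to zero before applying SVD.

```python
# Shifted Positive PMI matrix: max(PMI(w, c) - log k, 0), computed from a dense
# co-occurrence count matrix (a simplifying assumption for this sketch).
import numpy as np

def sppmi_matrix(counts, k=5):
    """counts[w, c] = #(w, c); returns the SPPMI matrix."""
    total = counts.sum()                              # |D|
    w_counts = counts.sum(axis=1, keepdims=True)      # #(w)
    c_counts = counts.sum(axis=0, keepdims=True)      # #(c)
    with np.errstate(divide="ignore"):                # log(0) -> -inf, clipped below
        pmi = np.log(counts * total) - np.log(w_counts * c_counts)
    return np.maximum(pmi - np.log(k), 0.0)

rng = np.random.default_rng(0)
counts = rng.poisson(0.5, size=(100, 80)).astype(float)
print(sppmi_matrix(counts).shape)   # (100, 80)
```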
6.2 Riemannian Optimization

An introduction to optimization over Riemannian manifolds can be found in (Udriste, 1994). An overview of retractions of high-rank matrices to low-rank manifolds is provided in (Absil and Oseledets, 2015). The projector-splitting algorithm was introduced in (Lubich and Oseledets, 2014), and was also mentioned in (Absil and Oseledets, 2015) as the “Lie-Trotter retraction”. Riemannian optimization has been successfully applied to various data science problems: for example, matrix completion (Vandereycken, 2013), large-scale recommender systems (Tan et al., 2014), and tensor completion (Kressner et al., 2014).

7 Conclusions

In our paper, we proposed a general two-step scheme for training the SGNS word embedding model and introduced an algorithm that searches for a solution in low-rank form via the Riemannian optimization framework. We also demonstrated the superiority of our method by providing an experimental comparison to existing state-of-the-art approaches. A possible direction of future work is to apply more advanced optimization techniques to Step 1 of the scheme proposed in Section 1 and to explore Step 2, i.e., obtaining embeddings from a given low-rank matrix.

Acknowledgments

This research was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).

References

P-A Absil and Ivan V Oseledets. 2015. Low-rank retractions: a survey and new results. Computational Optimization and Applications 62(1):5–29. Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In NAACL. pages 19–27. AH Bentbib and A Kanber. 2015. Block power method for SVD decomposition. Analele Stiintifice Ale Universitatii Ovidius Constanta-Seria Matematica 23(2):45–58. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res. (JAIR) 49:1–47. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In WWW. pages 406–414. Yoav Goldberg and Omer Levy. 2014. word2vec explained: deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722. Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics. S Sathiya Keerthi, Tobias Schnabel, and Rajiv Khanna. 2015. Towards a better understanding of predict and count models. arXiv preprint arXiv:1511.02024. Othmar Koch and Christian Lubich. 2007. Dynamical low-rank approximation. SIAM J. Matrix Anal. Appl. 29(2):434–454. Daniel Kressner, Michael Steinlechner, and Bart Vandereycken. 2014. Low-rank tensor completion by Riemannian optimization. BIT Numerical Mathematics 54(2):447–468. Siwei Lai, Kang Liu, Shi He, and Jun Zhao. 2015. How to generate a good word embedding? arXiv preprint arXiv:1507.05523. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In NIPS. pages 2177–2185. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. ACL 3:211–225. Christian Lubich and Ivan V Oseledets. 2014. A projector-splitting integrator for dynamical low-rank approximation.
BIT Numerical Mathematics 54(1):171–188. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. pages 3111–3119. Bamdev Mishra, Gilles Meyer, Silv`ere Bonnabel, and Rodolphe Sepulchre. 2014. Fixed-rank matrix factorizations and riemannian low-rank optimization. Computational Statistics 29(3-4):591–621. A Mukherjee, K Chen, N Wang, and J Zhu. 2015. On the degrees of freedom of reduced-rank estimators in multivariate regression. Biometrika 102(2):457– 477. Xin Rong. 2014. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738 . Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In EMNLP. Mingkui Tan, Ivor W Tsang, Li Wang, Bart Vandereycken, and Sinno Jialin Pan. 2014. Riemannian pursuit for big matrix recovery. In ICML. volume 32, pages 1539–1547. Constantin Udriste. 1994. Convex functions and optimization methods on Riemannian manifolds, volume 297. Springer Science & Business Media. Bart Vandereycken. 2013. Low-rank matrix completion by riemannian optimization. SIAM Journal on Optimization 23(2):1214–1236. Ke Wei, Jian-Feng Cai, Tony F Chan, and Shingyu Leung. 2016. Guarantees of riemannian optimization for low rank matrix recovery. SIAM Journal on Matrix Analysis and Applications 37(3):1198–1222. 2036
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2037–2048 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1186

Deep Multitask Learning for Semantic Dependency Parsing

Hao Peng∗ Sam Thomson† Noah A. Smith∗
∗Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
†School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
{hapeng,nasmith}@cs.washington.edu, [email protected]

Abstract

We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms. By using efficient, nearly arc-factored inference and a bidirectional-LSTM composed with a multi-layer perceptron, our base system is able to significantly improve the state of the art for semantic dependency parsing, without using hand-engineered features or syntax. We then explore two multitask learning approaches: one that shares parameters across formalisms, and one that uses higher-order structures to predict the graphs jointly. We find that both approaches improve performance across formalisms on average, achieving a new state of the art. Our code is open-source and available at https://github.com/Noahs-ARK/NeurboParser.

1 Introduction

Labeled directed graphs are a natural and flexible representation for semantics (Copestake et al., 2005; Baker et al., 2007; Surdeanu et al., 2008; Banarescu et al., 2013, inter alia). Their generality over trees, for instance, allows them to represent relational semantics while handling phenomena like coreference and coordination. Even syntactic formalisms are moving toward graphs (de Marneffe et al., 2014). However, full semantic graphs can be expensive to annotate, and efforts are fragmented across competing semantic theories, leading to a limited number of annotations in any one formalism. This makes learning to parse more difficult, especially for powerful but data-hungry machine learning techniques like neural networks.

[Figure 1: The example sentence “Last week, shareholders took their money and ran.” annotated with the three semantic formalisms of the broad-coverage semantic dependency parsing shared tasks: (a) DM, (b) PAS, and (c) PSD.]

In this work, we hypothesize that the overlap among theories and their corresponding representations can be exploited using multitask learning (Caruana, 1997), allowing us to learn from more data. We use the 2015 SemEval shared task on Broad-Coverage Semantic Dependency Parsing (SDP; Oepen et al., 2015) as our testbed. The shared task provides an English-language corpus with parallel annotations for three semantic graph representations, described in §2. Though the shared task was designed in part to encourage comparison between the formalisms, we are the first to treat SDP as a multitask learning problem. As a strong baseline, we introduce a new system that parses each formalism separately (§3).
It uses a bidirectional-LSTM composed with a multi-layer perceptron to score arcs and predicates, and has efficient, nearly arc-factored inference. Experiments show it significantly improves on state-of-the-art methods (§3.4). We then present two multitask extensions (§4.2 2037 DM PAS PSD id ood id ood id ood # labels 59 47 42 41 91 74 % trees 2.3 9.7 1.2 2.4 42.2 51.4 % projective 2.9 8.8 1.6 3.5 41.9 54.4 Table 1: Graph statistics for in-domain (WSJ, “id”) and out-of-domain (Brown corpus, “ood”) data. Numbers taken from Oepen et al. (2015). and §4.3), with a parameterization and factorization that implicitly models the relationship between multiple formalisms. Experiments show that both techniques improve over our basic model, with an additional (but smaller) improvement when they are combined (§4.5). Our analysis shows that the improvement in unlabeled F1 is greater for the two formalisms that are more structurally similar, and suggests directions for future work. Finally, we survey related work (§5), and summarize our contributions and findings (§6). 2 Broad-Coverage Semantic Dependency Parsing (SDP) First defined in a SemEval 2014 shared task (Oepen et al., 2014), and then extended by Oepen et al. (2015), the broad-coverage semantic depency parsing (SDP) task is centered around three semantic formalisms whose annotations have been converted into bilexical dependencies. See Figure 1 for an example. The formalisms come from varied linguistic traditions, but all three aim to capture predicate-argument relations between content-bearing words in a sentence. While at first glance similar to syntactic dependencies, semantic dependencies have distinct goals and characteristics, more akin to semantic role labeling (SRL; Gildea and Jurafsky, 2002) or the abstract meaning representation (AMR; Banarescu et al., 2013). They abstract over different syntactic realizations of the same or similar meaning (e.g., “She gave me the ball.” vs. “She gave the ball to me.”). Conversely, they attempt to distinguish between different senses even when realized in similar syntactic forms (e.g., “I baked in the kitchen.” vs. “I baked in the sun.”). Structurally, they are labeled directed graphs whose vertices are tokens in the sentence. This is in contrast to AMR whose vertices are abstract concepts, with no explicit alignment to tokens, which makes parsing more difficult (Flanigan et al., 2014). Their arc labels encode broadlyapplicable semantic relations rather than being tailored to any specific downstream application or ontology.1 They are not necessarily trees, because a token may be an argument of more than one predicate (e.g., in “John wants to eat,” John is both the wanter and the would-be eater). Their analyses may optionally leave out non–contentbearing tokens, such as punctuation or the infinitival “to,” or prepositions that simply mark the type of relation holding between other words. But when restricted to content-bearing tokens (including adjectives, adverbs, etc.), the subgraph is connected. In this sense, SDP provides a whole-sentence analysis. This is in contrast to PropBank-style SRL, which gives an analysis of only verbal and nominal predicates (Palmer et al., 2005). Semantic dependency graphs also tend to have higher levels of nonprojectivity than syntactic trees (Oepen et al., 2014). Sentences with graphs containing cycles have been removed from the dataset by the organizers, so all remaining graphs are directed acyclic graphs. Table 1 summarizes some of the dataset’s high-level statistics. 
Formalisms. Following the SemEval shared tasks, we consider three formalisms. The DM (DELPH-IN MRS) representation comes from DeepBank (Flickinger et al., 2012), which are manually-corrected parses from the LinGO English Resource Grammar (Copestake and Flickinger, 2000). LinGO is a head-driven phrase structure grammar (HPSG; Pollard and Sag, 1994) with minimal recursion semantics (Copestake et al., 2005). The PAS (Predicate-Argument Structures) representation is extracted from the Enju Treebank, which consists of automatic parses from the Enju HPSG parser (Miyao, 2006). PAS annotations are also available for the Penn Chinese Treebank (Xue et al., 2005). The PSD (Prague Semantic Dependencies) representation is extracted from the tectogrammatical layer of the Prague Czech-English Dependency Treebank (Hajiˇc et al., 2012). PSD annotations are also available for a Czech translation of the WSJ Corpus. In this work, we train and evaluate only on English annotations. Of the three, PAS follows syntax most closely, and prior work has found it the easiest to predict. PSD has the largest set of labels, and parsers 1This may make another disambiguation step necessary to use these representations in a downstream task, but there is evidence that modeling semantic composition separately from grounding in any ontology is an effective way to achieve broad coverage (Kwiatkowski et al., 2013). 2038 shareholders took act shareholders took arg1 (a) First-order. shareholders took arg1 act shareholders took arg1 act (b) Second-order. shareholders took arg1 arg1 act (c) Third-order. Figure 2: Examples of local structures. We refer to the number of arcs that a structure contains as its order. have significantly lower performance on it (Oepen et al., 2015). 3 Single-Task SDP Here we introduce our basic model, in which training and prediction for each formalism is kept completely separate. We also lay out basic notation, which will be reused for our multitask extensions. 3.1 Problem Formulation The output of semantic dependency parsing is a labeled directed graph (see Figure 1). Each arc has a label from a predefined set L, indicating the semantic relation of the child to the head. Given input sentence x, let Y(x) be the set of possible semantic graphs over x. The graph we seek maximizes a score function S: ˆy = arg max y∈Y(x) S(x, y), (1) We decompose S into a sum of local scores s for local structures (or “parts”) p in the graph: S(x, y) = X p∈y s(p). (2) For notational simplicity, we omit the dependence of s on x. See Figure 2a for examples of local structures. s is a parameterized function, whose parameters (denoted Θ and suppressed here for clarity) will be learned from the training data (§3.3). Since we search over every possible labeled graph (i.e., considering each labeled arc for each pair of words), our approach can be considered a graph-based (or all-pairs) method. The models presented in this work all share this common graph-based approach, differing only in the set of structures they score and in the parameterization of the scoring function s. This approach also underlies state-of-the-art approaches to SDP (Martins and Almeida, 2014). 3.2 Basic Model Our basic model is inspired by recent successes in neural arc-factored graph-based dependency parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Kuncoro et al., 2016). It borrows heavily from the neural arc-scoring architectures in those works, but decodes with a different algorithm under slightly different constraints. 
3.2.1 Basic Structures Our basic model factors over three types of structures (p in Equation 2): • predicate, indicating a predicate word, denoted i→·; • unlabeled arc, representing the existence of an arc from a predicate to an argument, denoted i→j; • labeled arc, an arc labeled with a semantic role, denoted i ℓ→j. Here i and j are word indices in a given sentence, and ℓindicates the arc label. This list corresponds to the most basic structures used by Martins and Almeida (2014). Selecting an output y corresponds precisely to selecting which instantiations of these structures are included. To ensure the internal consistency of predictions, the following constraints are enforced during decoding: • i→· if and only if there exists at least one j such that i→j; • If i→j, then there must be exactly one label ℓ such that i ℓ→j. Conversely, if not i→j, then there must not exist any i ℓ→j; We also enforce a determinism constraint (Flanigan et al., 2014): certain labels must not appear on more than one arc emanating from the same token. The set of deterministic labels is decided based on their appearance in the training set. Notably, we do not enforce that the predicted graph is connected or spanning. If not for the predicate and determinism constraints, our model would be arc-factored, and decoding could be done for each i, j pair independently. Our structures do overlap though, and we employ AD3 (Martins et al., 2011) to find the highest-scoring internally consistent semantic graph. AD3 is an approximate discrete optimization algorithm based on dual decomposition. It can be used to decode factor graphs over discrete variables when scored structures overlap, as is the case here. 2039 3.2.2 Basic Scoring Similarly to Kiperwasser and Goldberg (2016), our model learns representations of tokens in a sentence using a bi-directional LSTM (BiLSTM). Each different type of structure (predicate, unlabeled arc, labeled arc) then shares these same BiLSTM representations, feeding them into a multilayer perceptron (MLP) which is specific to the structure type. We present the architecture slightly differently from prior work, to make the transition to the multitask scenario (§4) smoother. In our presentation, we separate the model into a function φ that represents the input (corresponding to the BiLSTM and the initial layers of the MLPs), and a function ψ that represents the output (corresponding to the final layers of the MLPs), with the scores given by their inner product.2 Distributed input representations. Long shortterm memory networks (LSTMs) are a variant of recurrent neural networks (RNNs) designed to alleviate the vanishing gradient problem in RNNs (Hochreiter and Schmidhuber, 1997). A bi-directional LSTM (BiLSTM) runs over the sequence in both directions (Schuster and Paliwal, 1997; Graves, 2012). Given an input sentence x and its corresponding part-of-speech tag sequence, each token is mapped to a concatenation of its word embedding vector and POS tag vector. Two LSTMs are then run in opposite directions over the input vector sequence, outputting the concatenation of the two hidden vectors at each position i: hi = −→ h i; ←− h i  (we omit hi’s dependence on x and its own parameters). hi can be thought of as an encoder that contextualizes each token conditioning on all of its context, without any Markov assumption. h’s parameters are learned jointly with the rest of the model (§3.3); we refer the readers to Cho (2015) for technical details. 
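To make the decoding constraints listed in §3.2.1 above concrete, here is a small sketch (hypothetical data structures, not the authors' code) that checks whether a predicted graph satisfies them: a predicate is predicted if and only if it has at least one outgoing arc, every predicted arc carries exactly one label, and "deterministic" labels appear at most once per predicate.

```python
def is_consistent(predicates, arcs, arc_labels, deterministic_labels):
    """predicates: set of indices i; arcs: set of (i, j); arc_labels: dict (i, j) -> label."""
    heads_with_arcs = {i for (i, j) in arcs}
    if predicates != heads_with_arcs:            # i -> .  iff  some i -> j exists
        return False
    if set(arc_labels) != arcs:                  # exactly one label per predicted arc
        return False
    for i in predicates:                         # determinism constraint
        labels = [arc_labels[(h, j)] for (h, j) in arcs if h == i]
        for lab in deterministic_labels:
            if labels.count(lab) > 1:
                return False
    return True

# Example: "shareholders took money" with token indices 0, 1, 2.
preds = {1}
arcs = {(1, 0), (1, 2)}
labels = {(1, 0): "arg1", (1, 2): "arg2"}
print(is_consistent(preds, arcs, labels, deterministic_labels={"arg1", "arg2"}))  # True
```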
The input representation φ of a predicate structure depends on the representation of one word: φ(i→·) = tanh Cpredhi + bpred  . (3a) 2For clarity, we present single-layer BiLSTMs and MLPs, while in practice we use two layers for both. φ(i!·) E mbeddings B iLST M MLPs ⇥−! hi; − hi ⇤ ⇥ w ord vector; PO S vector ⇤ ⇥−! hj; − hj ⇤ φ(i!j) φ ! i `! j " (i!·) s(i!·) (i!j) s(i!j ! F irst-order scores s!i `! j" ! i `! j " 9 >>>= >>>; Indexed by labels O utput repr. Input repr. Figure 3: Illustration of the architecture of the basic model. i and j denote the indices of tokens in the given sentence. The figure depicts single-layer BiLSTM and MLPs, while in practice we use two layers for both. For unlabeled arc and labeled arc structures, it depends on both the head and the modifier (but not the label, which is captured in the distributed output representation): φ(i→j) = tanh CUA  hi; hj  + bUA  , (3b) φ(i ℓ→j) = tanh CLA  hi; hj  + bLA  . (3c) Distributed output representations. NLP researchers have found that embedding discrete output labels into a low dimensional real space is an effective way to capture commonalities among them (Srikumar and Manning, 2014; Hermann et al., 2014; FitzGerald et al., 2015, inter alia). In neural language models (Bengio et al., 2003; Mnih and Hinton, 2007, inter alia) the weights of the output layer could also be regarded as an output embedding. We associate each first-order structure p with a d-dimensional real vector ψ(p) which does not depend on particular words in p. Predicates and unlabeled arcs are each mapped to a single vector: ψ(i→·) = ψpred, (4a) ψ(i→j) = ψUA, (4b) and each label gets a vector: ψ(i ℓ→j) = ψLA(ℓ). (4c) Scoring. Finally, we use an inner product to score first-order structures: s(p) = φ(p) · ψ(p). (5) Figure 3 illustrates our basic model’s architecture. 2040 3.3 Learning The parameters of the model are learned using a max-margin objective. Informally, the goal is to learn parameters for the score function so that the gold parse is scored over every incorrect parse with a margin proportional to the cost of the incorrect parse. More formally, let D =  (xi, yi) N i=1 be the training set consisting of N pairs of sentence xi and its gold parse yi. Training is then the following ℓ2-regularized empirical risk minimization problem: min Θ λ 2 ∥Θ∥2 + 1 N N X i=1 L xi, yi; Θ  , (6) where Θ is all parameters in the model, and L is the structured hinge loss: L xi, yi; Θ  = max y∈Y(xi)  S xi, y  + c y, yi  −S xi, yi  . (7) c is a weighted Hamming distance that trades off between precision and recall (Taskar et al., 2004). Following Martins and Almeida (2014), we encourage recall over precision by using the costs 0.6 for false negative arc predictions and 0.4 for false positives. 3.4 Experiments We evaluate our basic model on the English dataset from SemEval 2015 Task 18 closed track.3 We split as in previous work (Almeida and Martins, 2015; Du et al., 2015), resulting in 33,964 training sentences from §00–19 of the WSJ corpus, 1,692 development sentences from §20, 1,410 sentences from §21 as in-domain test data, and 1,849 sentences sampled from the Brown Corpus as out-of-domain test data. The closed track differs from the open and gold tracks in that it does not allow access to any syntactic analyses. In the open track, additional machine generated syntactic parses are provided, while the gold-track gives access to various goldstandard syntactic analyses. 
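The following is a minimal numpy sketch of the first-order scoring in Equations 3–5 above (randomly initialized, single-layer parameters for readability; not the trained model): the input representation φ is an MLP over concatenated BiLSTM states, the output representation ψ is a learned vector per structure type (one per label for labeled arcs), and a structure's score is their inner product.

```python
# Score unlabeled and labeled arcs i -> j given precomputed BiLSTM states h.
import numpy as np

rng = np.random.default_rng(0)
d_h, d = 400, 100                          # BiLSTM state size (illustrative), repr. size
labels = ["arg1", "arg2", "loc", "top"]

# Parameters (Equations 3b-3c and 4b-4c); one layer shown for clarity.
C_UA, b_UA = rng.normal(0, 0.01, (d, 2 * d_h)), np.zeros(d)
C_LA, b_LA = rng.normal(0, 0.01, (d, 2 * d_h)), np.zeros(d)
psi_UA = rng.normal(0, 0.01, d)
psi_LA = {l: rng.normal(0, 0.01, d) for l in labels}

def arc_scores(h, i, j):
    """h: (sentence_length x d_h) BiLSTM states; returns unlabeled and per-label scores."""
    pair = np.concatenate([h[i], h[j]])
    phi_ua = np.tanh(C_UA @ pair + b_UA)                 # Eq. 3b
    phi_la = np.tanh(C_LA @ pair + b_LA)                 # Eq. 3c
    s_ua = phi_ua @ psi_UA                               # Eq. 5 with Eq. 4b
    s_la = {l: phi_la @ psi_LA[l] for l in labels}       # Eq. 5 with Eq. 4c
    return s_ua, s_la

h = rng.normal(size=(6, d_h))                            # stand-in for BiLSTM outputs
print(arc_scores(h, 1, 0)[0])
```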
Our model is evaluated with closed track data; it does not have access to any syntactic analyses during training or test. We refer the readers to §4.4 for implementation details, including training procedures, hyperparameters, pruning techniques, etc.. 3http://sdp.delph-in.net 4Paired bootstrap, p < 0.05 after Bonferroni correction. Model DM PAS PSD Avg. id Du et al., 2015 89.1 91.3 75.7 86.3 A&M, 2015 88.2 90.9 76.4 86.0 BASIC 89.4 92.2 77.6 87.4 ood Du et al., 2015 81.8 87.2 73.3 81.7 A&M, 2015 81.8 86.9 74.8 82.0 BASIC 84.5 88.3 75.3 83.6 Table 2: Labeled parsing performance (F1 score) on both in-domain (id) and out-of-domain (ood) test data. The last column shows the microaverage over the three tasks. Bold font indicates best performance without syntax. Underlines indicate statistical significance with Bonferroni (1936) correction compared to the best baseline system.4 Empirical results. As our model uses no explicit syntactic information, the most comparable models to ours are two state-of-the-art closed track systems due to Du et al. (2015) and Almeida and Martins (2015). Du et al. (2015) rely on graphtree transformation techniques proposed by Du et al. (2014), and apply a voting ensemble to wellstudied tree-oriented parsers. Closely related to ours is Almeida and Martins (2015), who used rich, hand-engineered second-order features and AD3 for inference. Table 2 compares our basic model to both baseline systems (labeled F1 score) on SemEval 2015 Task 18 test data. Scores of those systems are repeated from the official evaluation results. Our basic model significantly outperforms the best published results with a 1.1% absolute improvement on the in-domain test set and 1.6% on the out-ofdomain test set. 4 Multitask SDP We introduce two extensions to our single-task model, both of which use training data for all three formalisms to improve performance on each formalism’s parsing task. We describe a firstorder model, where representation functions are enhanced by parameter sharing while inference is kept separate for each task (§4.2). We then introduce a model with cross-task higher-order structures that uses joint inference across different tasks (§4.3). Both multitask models use AD3 for decoding, and are trained with the same marginbased objective, as in our single-task model. 2041 4.1 Problem Formulation We will use an additional superscript t ∈T to distinguish the three tasks (e.g., y(t), φ(t)), where T = {DM, PAS, PSD}. Our task is now to predict three graphs {y(t)}t∈T for a given input sentence x. Multitask SDP can also be understood as parsing x into a single unified multigraph y = S t∈T y(t). Similarly to Equations 1–2, we decompose y’s score S(x, y) into a sum of local scores for local structures in y, and we seek a multigraph ˆy that maximizes S(x, y). 4.2 Multitask SDP with Parameter Sharing A common approach when using BiLSTMs for multitask learning is to share the BiLSTM part of the model across tasks, while training specialized classifiers for each task (Søgaard and Goldberg, 2016). In this spirit, we let each task keep its own specialized MLPs, and explore two variants of our model that share parameters at the BiLSTM level. The first variant consists of a set of task-specific BiLSTM encoders as well as a common one that is shared across all tasks. We denote it FREDA. 
FREDA uses a neural generalization of “frustratingly easy” domain adaptation (Daum´e III, 2007; Kim et al., 2016), where one augments domainspecific features with a shared set of features to capture global patterns. Formally, let {h(t)}t∈T denote the three task-specific encoders. We introduce another encoder eh that is shared across all tasks. Then a new set of input functions {φ(t)}t∈T can be defined as in Equations 3a–3c, for example: φ(t)(i ℓ→j) = tanh C(t) LA  h(t) i ; h(t) j ; ehi; ehj  + b(t) LA  . (8) The predicate and unlabeled arc versions are analogous. The output representations {ψ(t)} remain task-specific, and the score is still the inner product between the input representation and the output representation. The second variant, which we call SHARED, uses only the shared encoder eh, and doesn’t use task-specific encoders {h(t)}. It can be understood as a special case of FREDA where the dimensions of the task-specific encoders are 0. 4.3 Multitask SDP with Cross-Task Structures In syntactic parsing, higher-order structures have commonly been used to model interactions between multiple adjacent arcs in the same dependency tree (Carreras, 2007; Smith and Eisner, 2008; Martins et al., 2009; Zhang et al., 2014, inter alia). Llu´ıs et al. (2013), in contrast, used second-order structures to jointly model syntactic dependencies and semantic roles. Similarly, we use higher-order structures across tasks instead of within tasks. In this work, we look at interactions between arcs that share the same head and modifier.5 See Figures 2b and 2c for examples of higher-order cross-task structures. Higher-order structure scoring. Borrowing from Lei et al. (2014), we introduce a low-rank tensor scoring strategy that, given a higher-order structure p, models interactions between the firstorder structures (i.e., arcs) p is made up of. This approach builds on and extends the parameter sharing techniques in §4.2. It can either follow FREDA or SHARED to get the input representations for first-order structures. We first introduce basic tensor notation. The order of a tensor is the number of its dimensions. The outer product of two vectors forms a secondorder tensor (matrix) where [u ⊗v]i,j = uivj. We denote the inner product of two tensors of the same dimensions by ⟨·, ·⟩, which first takes their element-wise product, then sums all the elements in the resulting tensor. For example, let p be a labeled third-order structure, including one labeled arc from each of the three different tasks: p = {p(t)}t∈T . Intuitively, s(p) should capture every pairwise interaction between the three input and three output representations of p. Formally, we want the score function to include a parameter for each term in the outer product of the representation vectors: s(p) = * W, O t∈T  φ(t)  p(t) ⊗ψ(t)  p(t)+ , (9) where W is a sixth-order tensor of parameters.6 With typical dimensions of representation vectors, this leads to an unreasonably large number of 5In the future we hope to model structures over larger motifs, both across and within tasks, to potentially capture when an arc in one formalism corresponds to a path in another formalism, for example. 6This is, of course, not the only way to model interactions between several representations. For instance, one could concatenate them and feed them into another MLP. Our preliminary experiments in this direction suggested that it may be less effective given a similar number of parameters, but we did not run full experiments. 2042 parameters. Following Lei et al. 
(2014), we upperbound the rank of W by r to limit the number of parameters (r is a hyperparameter, decided empirically). Using the fact that a tensor of rank at most r can be decomposed into a sum of r rank-1 tensors (Hitchcock, 1927), we reparameterize W to enforce the low-rank constraint by construction: W = r X j=1 O t∈T h U(t) LA i j,: ⊗ h V(t) LA i j,:  , (10) where U(t) LA, V(t) LA ∈Rr×d are now our parameters. [·]j,: denotes the jth row of a matrix. Substituting this back into Equation 9 and rearranging, the score function s(p) can then be rewritten as: r X j=1 Y t∈T h U(t) LAφ(t) p(t)i j h V(t) LAψ(t) p(t)i j . (11) We refer readers to Kolda and Bader (2009) for mathematical details. For labeled higher-order structures our parameters consist of the set of six matrices, {U(t) LA} ∪ {V(t) LA}. These parameters are shared between second-order and third-order labeled structures. Labeled second-order structures are scored as Equation 11, but with the product extending over only the two relevant tasks. Concretely, only four of the representation functions are used rather than all six, along with the four corresponding matrices from {U(t) LA} ∪{V(t) LA}. Unlabeled crosstask structures are scored analogously, reusing the same representations, but with a separate set of parameter matrices {U(t) UA} ∪{V(t) UA}. Note that we are not doing tensor factorization; we are learning U(t) LA, V(t) LA, U(t) UA, and V(t) UA directly, and W is never explicitly instantiated. Inference and learning. Given a sentence, we use AD3 to jointly decode all three formalisms.7 The training objective used for learning is the sum of the losses for individual tasks. 4.4 Implementation Details Each input token is mapped to a concatenation of three real vectors: a pre-trained word vector; a randomly-initialized word vector; and a randomlyinitialized POS tag vector.8 All three are updated 7Joint inference comes at a cost; our third-order model is able to decode roughly 5.2 sentences (i.e., 15.5 task-specific graphs) per second on a single Xeon E5-2690 2.60GHz CPU. 8There are minor differences in the part-of-speech data provided with the three formalisms. For the basic models, we Hyperparameter Value Pre-trained word embedding dimension 100 Randomly-initialized word embedding dimension 25 POS tag embedding dimension 25 Dimensions of representations φ and ψ 100 MLP layers 2 BiLSTM layers 2 BiLSTM dimensions 200 Rank of tensor r 100 α for word dropout 0.25 Table 3: Hyperparameters used in the experiments. during training. We use 100-dimensional GloVe (Pennington et al., 2014) vectors trained over Wikipedia and Gigaword as pre-trained word embeddings. To deal with out-of-vocabulary words, we apply word dropout (Iyyer et al., 2015) and randomly replace a word w with a special unksymbol with probability α 1+#(w), where #(w) is the count of w in the training set. Models are trained for up to 30 epochs with Adam (Kingma and Ba, 2015), with β1 = β2 = 0.9, and initial learning rate η0 = 10−3. The learning rate η is annealed at a rate of 0.5 every 10 epochs (Dozat and Manning, 2017). We apply early-stopping based on the labeled F1 score on the development set.9 We set the maximum number of iterations of AD3 to 500 and round decisions when it doesn’t converge. We clip the ℓ2 norm of gradients to 1 (Graves, 2013; Sutskever et al., 2014), and we do not use mini-batches. 
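The low-rank cross-task score of Equation 11 above can be sketched directly (randomly initialized parameters; an illustration, not the released implementation): for a third-order structure with one labeled arc per task, the score is a sum over r rank-1 terms, each a product of projected input and output representations across the tasks involved.

```python
# Cross-task scoring with a low-rank tensor parameterization (Equation 11).
import numpy as np

rng = np.random.default_rng(0)
tasks = ["DM", "PAS", "PSD"]
r, d = 100, 100

U = {t: rng.normal(0, 0.01, (r, d)) for t in tasks}    # U_LA^(t)
V = {t: rng.normal(0, 0.01, (r, d)) for t in tasks}    # V_LA^(t)

def cross_task_score(phi, psi):
    """phi, psi: dicts task -> d-dimensional representations of that task's labeled arc."""
    prod = np.ones(r)
    for t in phi:                                      # product over the tasks involved
        prod *= (U[t] @ phi[t]) * (V[t] @ psi[t])
    return prod.sum()                                  # sum over the r rank-1 components

phi = {t: rng.normal(size=d) for t in tasks}
psi = {t: rng.normal(size=d) for t in tasks}
print(cross_task_score(phi, psi))                      # third-order structure
print(cross_task_score({t: phi[t] for t in ["DM", "PAS"]},
                       {t: psi[t] for t in ["DM", "PAS"]}))  # second-order: two tasks
```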
Randomly initialized parameters are sampled from a uniform distribution over  − p 6/(dr + dc), p 6/(dr + dc)  , where dr and dc are the number of the rows and columns in the matrix, respectively. An ℓ2 penalty of λ = 10−6 is applied to all weights. Other hyperparameters are summarized in Table 3. We use the same pruner as Martins and Almeida (2014), where a first-order feature-rich unlabeled pruning model is trained for each task, and arcs with posterior probability below 10−4 are discarded. We further prune labeled structures that appear less than 30 times in the training set. In the development set, about 10% of the arcs remain after pruning, with a recall of around 99%. use the POS tags provided with the respective dataset; for the multitask models, we use the (automatic) POS tags provided with DM. 9Micro-averaged labeled F1 for the multitask models. 2043 4.5 Experiments Experimental settings. We compare four multitask variants to the basic model, as well as the two baseline systems introduced in §3.4. • SHARED1 is a first-order model. It uses a single shared BiLSTM encoder, and keeps the inference separate for each task. • FREDA1 is a first-order model based on “frustratingly easy” parameter sharing. It uses a shared encoder as well as task-specific ones. The inference is kept separate for each task. • SHARED3 is a third-order model. It follows SHARED1 and uses a single shared BiLSTM encoder, but additionally employs cross-task structures and inference. • FREDA3 is also a third-order model. It combines FREDA1 and SHARED3 by using both “frustratingly easy” parameter sharing and cross-task structures and inference. In addition, we also examine the effects of syntax by comparing our models to the state-of-the-art open track system (Almeida and Martins, 2015).10 Main results overview. Table 4a compares our models to the best published results (labeled F1 score) on SemEval 2015 Task 18 in-domain test set. Our basic model improves over all closed track entries in all formalisms. It is even with the best open track system for DM and PSD, but improves on PAS and on average, without making use of any syntax. Three of our four multitask variants further improve over our basic model; SHARED1’s differences are statistically insignificant. Our best models (SHARED3, FREDA3) outperform the previous state-of-the-art closed track system by 1.7% absolute F1, and the best open track system by 0.9%, without the use of syntax. We observe similar trends on the out-of-domain test set (Table 4b), with the exception that, on PSD, our best-performing model’s improvement over the open-track system of Almeida and Martins (2015) is not statistically significant. The extent to which we might benefit from syntactic information remains unclear. With automatically generated syntactic parses, Almeida and Martins (2015) manage to obtain more than 1% absolute improvements over their closed track en10Kanerva et al. (2015) was the winner of the gold track, which overall saw higher performance than the closed and open tracks. Since gold-standard syntactic analyses are not available in most realistic scenarios, we do not include it in this comparison. DM PAS PSD Avg. Du et al., 2015 89.1 91.3 75.7 86.3 A&M, 2015 (closed) 88.2 90.9 76.4 86.0 A&M, 2015 (open)† 89.4 91.7 77.6 87.1 BASIC 89.4 92.2 77.6 87.4 SHARED1 89.7 91.9 77.8 87.4 FREDA1 90.0 92.3 78.1 87.7 SHARED3 90.3 92.5 78.5 88.0 FREDA3 90.4 92.7 78.5 88.0 (a) Labeled F1 score on the in-domain test set. DM PAS PSD Avg. 
Du et al., 2015 81.8 87.2 73.3 81.7 A&M, 2015 (closed) 81.8 86.9 74.8 82.0 A&M, 2015 (open)† 83.8 87.6 76.2 83.3 BASIC 84.5 88.3 75.3 83.6 SHARED1 84.4 88.1 75.4 83.5 FREDA1 84.9 88.3 75.8 83.9 SHARED3 85.3 88.4 76.1 84.1 FREDA3 85.3 89.0 76.4 84.4 (b) Labeled F1 score on the out-of-domain test set. Table 4: The last columns show the micro-average over the three tasks. † denotes the use of syntactic parses. Bold font indicates best performance among all systems, and underlines indicate statistical significance with Bonferroni correction compared to A&M, 2015 (open), the strongest baseline system. try, which is consistent with the extensive evaluation by Zhang et al. (2016), but we leave the incorporation of syntactic trees to future work. Syntactic parsing could be treated as yet another output task, as explored in Llu´ıs et al. (2013) and in the transition-based frameworks of Henderson et al. (2013) and Swayamdipta et al. (2016). Effects of structural overlap. We hypothesized that the overlap between formalisms would enable multitask learning to be effective; in this section we investigate in more detail how structural overlap affected performance. By looking at undirected overlap between unlabeled arcs, we discover that modeling only arcs in the same direction may have been a design mistake. DM and PAS are more structurally similar to each other than either is to PSD. Table 5 compares the structural similarities between the three for2044 Undirected Directed DM PAS PSD DM PAS PSD DM 67.2 56.8 64.2 26.1 PAS 70.0 54.9 66.9 26.1 PSD 57.4 56.3 26.4 29.6 Table 5: Pairwise structural similarities between the three formalisms in unlabeled F1 score. Scores from Oepen et al. (2015). DM PAS PSD UF LF UF LF UF LF FREDA1 91.7 90.4 93.1 91.6 89.0 79.8 FREDA3 91.9 90.8 93.4 92.0 88.6 80.4 Table 6: Unlabeled (UF) and labeled (LF) parsing performance of FREDA1 and FREDA3 on the development set of SemEval 2015 Task 18. malisms in unlabeled F1 score (each formalism’s gold-standard unlabeled graph is used as a prediction of each other formalism’s gold-standard unlabeled graph). All three formalisms have more than 50% overlap when ignoring arcs’ directions, but considering direction, PSD is clearly different; PSD reverses the direction about half of the time it shares an edge with another formalism. A concrete example can be found in Figure 1, where DM and PAS both have an arc from “Last” to “week,” while PSD has an arc from “week” to “Last.” We can compare FREDA3 to FREDA1 to isolate the effect of modeling higher-order structures. Table 6 shows performance on the development data in both unlabeled and labeled F1. We can see that FREDA3’s unlabeled performance improves on DM and PAS, but degrades on PSD. This supports our hypothesis, and suggests that in future work, a more careful selection of structures to model might lead to further improvements. 5 Related Work We note two important strands of related work. Graph-based parsing. Graph-based parsing was originally invented to handle non-projective syntax (McDonald et al., 2005; Koo et al., 2010; Martins et al., 2013, inter alia), but has been adapted to semantic parsing (Flanigan et al., 2014; Martins and Almeida, 2014; Thomson et al., 2014; Kuhlmann, 2014, inter alia). Local structure scoring was traditionally done with linear models over hand-engineered features, but lately, various forms of representation learning have been explored to learn feature combinations (Lei et al., 2014; Taub-Tabib et al., 2015; Pei et al., 2015, inter alia). 
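Returning to the structural-overlap analysis above: the pairwise similarities in Table 5 are obtained by treating one formalism's gold-standard unlabeled graph as a prediction of another's. A minimal sketch of that unlabeled F1 computation, over hypothetical arc sets, might look as follows; the data here is made up for illustration.

```python
def unlabeled_f1(pred_arcs, gold_arcs, directed=True):
    """F1 between two sets of (head, dependent) arcs.

    With directed=False an arc matches regardless of its direction,
    mirroring the 'Undirected' columns of Table 5.
    """
    if not directed:
        pred_arcs = {frozenset(a) for a in pred_arcs}
        gold_arcs = {frozenset(a) for a in gold_arcs}
    else:
        pred_arcs, gold_arcs = set(pred_arcs), set(gold_arcs)
    tp = len(pred_arcs & gold_arcs)
    if tp == 0:
        return 0.0
    p, r = tp / len(pred_arcs), tp / len(gold_arcs)
    return 2 * p * r / (p + r)

# Hypothetical arcs for "Last week": DM and PAS attach "Last" -> "week",
# while PSD reverses the direction, as in the Figure 1 example.
dm_arcs = {(0, 1)}
psd_arcs = {(1, 0)}
print(unlabeled_f1(dm_arcs, psd_arcs, directed=True))   # 0.0
print(unlabeled_f1(dm_arcs, psd_arcs, directed=False))  # 1.0
```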
Our work is perhaps closest to those who used BiLSTMs to encode inputs (Kiperwasser and Goldberg, 2016; Kuncoro et al., 2016; Wang and Chang, 2016; Dozat and Manning, 2017; Ma and Hovy, 2016). Multitask learning in NLP. There have been many efforts in NLP to use joint learning to replace pipelines, motivated by concerns about cascading errors. Collobert and Weston (2008) proposed sharing the same word representation while solving multiple NLP tasks. Zhang and Weiss (2016) use a continuous stacking model for POS tagging and parsing. Ammar et al. (2016) and Guo et al. (2016) explored parameter sharing for multilingual parsing. Johansson (2013) and Kshirsagar et al. (2015) applied ideas from domain adaptation to multitask learning. Successes in multitask learning have been enabled by advances in representation learning as well as earlier explorations of parameter sharing (Ando and Zhang, 2005; Blitzer et al., 2006; Daum´e III, 2007). 6 Conclusion We showed two orthogonal ways to apply deep multitask learning to graph-based parsing. The first shares parameters when encoding tokens in the input with recurrent neural networks, and the second introduces interactions between output structures across formalisms. Without using syntactic parsing, these approaches outperform even state-of-the-art semantic dependency parsing systems that use syntax. Because our techniques apply to labeled directed graphs in general, they can easily be extended to incorporate more formalisms, semantic or otherwise. In future work we hope to explore cross-task scoring and inference for tasks where parallel annotations are not available. Our code is opensource and available at https://github. com/Noahs-ARK/NeurboParser. Acknowledgements We thank the Ark, Maxwell Forbes, Luheng He, Kenton Lee, Julian Michael, and Jin-ge Yao for their helpful comments on an earlier version of this draft, and the anonymous reviewers for their valuable feedback. This work was supported by NSF grant IIS-1562364 and DARPA grant FA8750-122-0342 funded under the DEFT program. 2045 References Mariana S. C. Almeida and Andr´e F. T. Martins. 2015. Lisbon: Evaluating TurboSemanticParser on multiple languages and out-of-domain data. In Proc. of SemEval. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. TACL 4:431–444. Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR 6:1817–1853. Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval’07 task 19: Frame semantic structure extraction. In Proc. of SemEval. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proc. of LAW VII & ID. Yoshua Bengio, R`ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. JMLR 3:1137–1155. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP. Carlo E. Bonferroni. 1936. Teoria statistica delle classi e calcolo delle probabilit`a. Pubblicazioni del R. Istituto Superiore di Scienze Economiche e Commerciali di Firenze 8:3–62. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In Proc. of CoNLL. Rich Caruana. 1997. Multitask learning. Machine Learning 28(1):41–75. Kyunghyun Cho. 2015. 
Natural language understanding with distributed representation. ArXiv:1511.07916. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML. Ann Copestake and Dan Flickinger. 2000. An open source grammar development environment and broad-coverage English grammar using HPSG. In Proc. of LREC. Ann Copestake, Dan Flickinger, Ivan A. Sag, and Carl Pollard. 2005. Minimal recursion semantics: An introduction. Research on Language & Computation 3(4):281–332. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proc. of ACL. Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proc. of LREC. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proc. of ICLR. Yantao Du, Fan Zhang, Weiwei Sun, and Xiaojun Wan. 2014. Peking: Profiling syntactic tree parsing techniques for semantic graph parsing. In Proc. of SemEval. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2015. Peking: Building semantic dependency graphs with a hybrid parser. In Proc. of SemEval. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proc. of EMNLP. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proc. of ACL. Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. DeepBank: A dynamically annotated treebank of the Wall Street Journal. In Proc. of TLT. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics 28(3):245–288. Alex Graves. 2012. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385 of Studies in Computational Intelligence. Springer. Alex Graves. 2013. Generating sequences with recurrent neural networks. ArXiv 1308.0850. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2016. A universal framework for inductive transfer parsing across multi-typed treebanks. In Proc. of COLING. Jan Hajiˇc, Eva Hajiˇcov´a, Jarmila Panevov´a, Petr Sgall, Ondˇrej Bojar, Silvie Cinkov´a, Eva Fuˇc´ıkov´a, Marie Mikulov´a, Petr Pajas, Jan Popelka, Jiˇr´ı Semeck´y, Jana ˇSindlerov´a, Jan ˇStˇep´anek, Josef Toman, Zdeˇnka Ureˇsov´a, and Zdenˇek ˇZabokrtsk´y. 2012. Announcing Prague Czech-English dependency treebank 2.0. In Proc. LREC. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multi-lingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics 39(4):949–998. 2046 Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proc. of ACL. Frank L. Hitchcock. 1927. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematical Physics 6(1):164–189. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9(8):1735–1780. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proc. of ACL. Richard Johansson. 2013. Training parsers on incompatible treebanks. In Proc. of NAACL. Jenna Kanerva, Juhani Luotolahti, and Filip Ginter. 2015. 
Turku: Semantic dependency parsing as a sequence classification. In Proc. of SemEval. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proc. of COLING. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proc. of ICLR. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL 4:313– 327. Tamara G. Kolda and Brett W. Bader. 2009. Tensor decompositions and applications. SIAM Review 51(3):455–500. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proc. of EMNLP. Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Proc. of ACL. Marco Kuhlmann. 2014. Link¨oping: Cubic-time graph parsing with a simple scoring scheme. In Proc. of SemEval. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proc. of EMNLP. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proc. of EMNLP. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proc. of ACL. Xavier Llu´ıs, Xavier Carreras, and Llu´ıs M`arquez. 2013. Joint arc-factored parsing of syntactic and semantic dependencies. TACL 1:219–230. Xuezhe Ma and Eduard Hovy. 2016. Neural probabilistic model for non-projective MST parsing. ArXiv 1701.00874. Andr´e F. T. Martins and Mariana S. C. Almeida. 2014. Priberam: A turbo semantic parser with second order features. In Proc. of SemEval. Andr´e F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proc. of ACL. Andr´e F. T. Martins, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proc. of ACL. Andr´e F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2011. Dual decomposition with many overlapping components. In Proc. of EMNLP. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proc. of ACL. Yusuke Miyao. 2006. From linguistic theory to syntactic analysis: Corpus-oriented grammar development and feature forest model. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proc. of ICML. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, and Zdeˇnka Ureˇsov´a. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proc. of SemEval. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proc. of SemEval. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics 31(1):71–106. Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proc. of ACL. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. 
GloVe: Global vectors for word representation. In Proc. of EMNLP. 2047 Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. David Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proc. of ACL. Vivek Srikumar and Christopher D Manning. 2014. Learning distributed representations for structured output prediction. In Proc. of NIPS. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proc. of CoNLL. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. of NIPS. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntacticsemantic parsing with stack LSTMs. In Proc. of CoNLL. Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin Markov networks. In Advances in Neural Information Processing Systems 16. Hillel Taub-Tabib, Yoav Goldberg, and Amir Globerson. 2015. Template kernels for dependency parsing. In Proc. of NAACL. Sam Thomson, Brendan O’Connor, Jeffrey Flanigan, David Bamman, Jesse Dodge, Swabha Swayamdipta, Nathan Schneider, Chris Dyer, and Noah A. Smith. 2014. CMU: Arc-factored, discriminative semantic dependency parsing. In Proc. of SemEval. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proc. of ACL. Naiwen Xue, Fei Xia, Fu-dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engineering 11(2):207–238. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics 42(3):353–389. Yuan Zhang, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2014. Greed is good if randomized: New inference for dependency parsing. In Proc. of EMNLP. Yuan Zhang and David Weiss. 2016. Stackpropagation: Improved representation learning for syntax. In Proc. of ACL. 2048
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2049–2058 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1187 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2049–2058 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1187 Improved Word Representation Learning with Sememes Yilin Niu1∗, Ruobing Xie1∗, Zhiyuan Liu1,2 †, Maosong Sun1,2 1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China 2 Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou 221009 China Abstract Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed by several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we present that, word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to capture exact meanings of a word within specific contexts accurately. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply the attention scheme to detect word senses in various contexts. We conduct experiments on two tasks including word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm our models being capable of correctly modeling sememe information. 1 Introduction Sememes are defined as minimum semantic units of word meanings, and there exists a limited close set of sememes to compose the semantic meanings of an open set of concepts (i.e. word sense). However, sememes are not explicit ∗indicates equal contribution †Corresponding author: Z. Liu ([email protected]) for each word. Hence, people manually annotate word sememes and build linguistic common-sense knowledge bases. HowNet (Dong and Dong, 2003) is one of such knowledge bases, which annotates each concept in Chinese with one or more relevant sememes. Different from WordNet (Miller, 1995), the philosophy of HowNet emphasizes the significance of part and attribute represented by sememes. HowNet has been widely utilized in word similarity computation (Liu and Li, 2002) and sentiment analysis (Xianghua et al., 2013), and in section 3.2 we will give a detailed introduction to sememes, senses and words in HowNet. In this paper, we aim to incorporate word sememes into word representation learning (WRL) and learn improved word embeddings in a lowdimensional semantic space. WRL is a fundamental and critical step in many NLP tasks such as language modeling (Bengio et al., 2003) and neural machine translation (Sutskever et al., 2014). There have been a lot of researches for learning word representations, among which word2vec (Mikolov et al., 2013) achieves a nice balance between effectiveness and efficiency. In word2vec, each word corresponds to one single embedding, ignoring the polysemy of most words. 
To address this issue, (Huang et al., 2012) introduces a multiprototype model for WRL, conducting unsupervised word sense induction and embeddings according to context clusters. (Chen et al., 2014) further utilizes the synset information in WordNet to instruct word sense representation learning. From these previous studies, we conclude that word sense disambiguation are critical for WRL, and we believe that the sememe annotation of word senses in HowNet can provide necessary semantic regularization for the both tasks. To explore its feasibility, we propose a novel Sememe-Encoded Word Representation Learning 2049 (SE-WRL) model, which detects word senses and learns representations simultaneously. More specifically, this framework regards each word sense as a combination of its sememes, and iteratively performs word sense disambiguation according to their contexts and learn representations of sememes, senses and words by extending Skip-gram in word2vec (Mikolov et al., 2013). In this framework, an attention-based method is proposed to select appropriate word senses according to contexts automatically. To take full advantages of sememes, we propose three different learning and attention strategies for SE-WRL. In experiments, we evaluate our framework on two tasks including word similarity and word analogy, and further conduct case studies on sememe, sense and word representations. The evaluation results show that our models outperform other baselines significantly, especially on word analogy. This indicates that our models can build better knowledge representations with the help of sememe information, and also implies the potential of our models on word sense disambiguation. The key contributions of this work are concluded as follows: (1) To the best of our knowledge, this is the first work to utilize sememes in HowNet to improve word representation learning. (2) We successfully apply the attention scheme to detect word senses and learn representations according to contexts with the favor of the sememe annotation in HowNet. (3) We conduct extensive experiments and verify the effectiveness of incorporating word sememes for improved WRL. 2 Related Work 2.1 Word Representation Recent years have witnessed the great thrive in word representation learning. It is simple and straightforward to represent words using one-hot representations, but it usually struggles with the data sparsity issue and the neglect of semantic relations between words. To address these issues, (Rumelhart et al., 1988) proposes the idea of distributed representation which projects all words into a continuous low-dimensional semantic space, considering each word as a vector. Distributed word representations are powerful and have been widely utilized in many NLP tasks, including neural language models (Bengio et al., 2003; Mikolov et al., 2010), machine translation (Sutskever et al., 2014; Bahdanau et al., 2015), parsing (Chen and Manning, 2014) and text classification (Zhang et al., 2015). Word distributed representations are capable of encoding semantic meanings in vector space, serving as the fundamental and essential inputs of many NLP tasks. There are large amounts of efforts devoted to learning better word representations. As the exponential growth of text corpora, model efficiency becomes an important issue. (Mikolov et al., 2013) proposes two models, CBOW and Skipgram, achieving a good balance between effectiveness and efficiency. 
These models assume that the meanings of words can be well reflected by their contexts, and learn word representations by maximizing the predictive probabilities between words and their contexts. (Pennington et al., 2014) further utilizes matrix factorization on word affinity matrix to learn word representations. However, these models merely arrange only one vector for each word, regardless of the fact that many words have multiple senses. (Huang et al., 2012; Tian et al., 2014) utilize multi-prototype vector models to learn word representations and build distinct vectors for each word sense. (Neelakantan et al., 2015) presents an extension to Skip-gram model for learning non-parametric multiple embeddings per word. (Rothe and Sch¨utze, 2015) also utilizes an Autoencoder to jointly learn word, sense and synset representations in the same semantic space. This paper, for the first time, jointly learns representations of sememes, senses and words. The sememe annotation in HowNet provides useful semantic regularization for WRL. Moreover, the unified representations incorporated with sememes also provide us more explicit explanations of both word and sense embeddings. 2.2 Word Sense Disambiguation and Representation Learning Word sense disambiguation (WSD) aims to identify word senses or meanings in a certain context computationally. There are mainly two approaches for WSD, namely the supervised methods and the knowledge-based methods. Supervised methods usually take the surrounding words or senses as features and use classifiers like SVM for word sense disambiguation (Lee et al., 2004), which are intensively limited to the time-consuming human annotation of training data. On contrary, knowledge-based methods utilize 2050 large external knowledge resources such as knowledge bases or dictionaries to suggest possible senses for a word. (Banerjee and Pedersen, 2002) exploits the rich hierarchy of semantic relations in WordNet (Miller, 1995) for an adapted dictionarybased WSD algorithm. (Bordes et al., 2011) introduces synset information in WordNet to WRL. (Chen et al., 2014) considers synsets in WordNet as different word senses, and jointly conducts word sense disambiguation and word / sense representation learning. (Guo et al., 2014) considers bilingual datasets to learn sense-specific word representations. Moreover, (Jauhar et al., 2015) proposes two approaches to learn sense-specific word representations that are grounded to ontologies. (Pilehvar and Collier, 2016) utilizes personalized PageRank to learn de-conflated semantic representations of words. In this paper, we follow the knowledge-based approach and automatically detect word senses according to the contexts with the favor of sememe information in HowNet. To the best of our knowledge, this is the first attempt to apply attentionbased models to encode sememe information for word representation learning. 3 Methodology In this section, we present our framework Sememe-Encoded WRL (SE-WRL) that considers sememe information for word sense disambiguation and representation learning. Specifically, we learn our models on a large-scale text corpus with the semantic regularization of the sememe annotation in HowNet and obtain sememe, sense and word embeddings for evaluation tasks. In the following sections, we first introduce HowNet and the structures of sememes, senses and words. Then we discuss the conventional WRL model Skip-gram that we utilize for the sememeencoded framework. Finally, we propose three sememe-encoded models in details. 
3.1 Sememes, Senses and Words in HowNet In this section, we first introduce the arrangement of sememes, senses and words in HowNet. HowNet annotates precise senses to each word, and for each sense, HowNet annotates the significance of parts and attributes represented by sememes. Fig. 1 gives an example of sememes, senses and words in HowNet. The first layer represents the word “apple”. The word “apple” actually has two main senses shown on the second layer: one is a sort of juicy fruit (apple), and another is a famous computer brand (Apple brand). The third and following layers are those sememes explaining each sense. For instance, the first sense Apple brand indicates a computer brand, and thus has sememes computer, bring and SpeBrand. From Fig. 1 we can find that, sememes of many senses in HowNet are annotated with various relations, such as define and modifier, and form complicated hierarchical structures. In this paper, for simplicity, we only consider all annotated sememes of each sense as a sememe set without considering their internal structure. HowNet assumes the limited annotated sememes can well represent senses and words in the real-world scenario, and thus sememes are expected to be useful for both WSD and WRL. define define modifier modifier sense1(Apple brand) sense1(Apple brand) sense2(apple) sense2(apple) 电脑 (computer) 电脑 (computer) 水果 (fruit) 水果 (fruit) 苹果 (Apple brand/apple) 苹果 (Apple brand/apple) 样式值 (PatternValue) 样式值 (PatternValue) 能 (able) 能 (able) 携带 (bring) 携带 (bring) 特定牌子 (SpeBrand) 特定牌子 (SpeBrand) Figure 1: Examples of sememes, senses and words. We introduce the notions utilized in the following sections as follows. We define the overall sememe, sense and word sets used in training as X, S and W respectively. For each w ∈W, there are possible multiple senses s(w) i ∈S(w) where S(w) represents the sense set of w. Each sense s(w) i consists of several sememes x(si) j ∈X(w) i . For each target word w in a sequential plain text, C(w) represents its context word set. 3.2 Conventional Skip-gram Model We directly utilize the widely-used model Skipgram to implement our SE-WRL model, because Skip-gram has well balanced effectiveness as well as efficiency (Mikolov et al., 2013). The standard skip-gram model assumes that word embeddings should relate to their context words. It aims at 2051 maximizing the predictive probability of context words conditioned on the target word w. Formally, we utilize a sliding window to select the context word set C(w). For a word sequence H = {w1, · · · , wn}, Skip-gram model intends to maximize: L(H) = n−K X i=K log Pr(wi−K, · · · , wi+K|wi), (1) where K is the size of sliding window. Pr(wi−K, · · · , wi+K|wi) represents the predictive probability of context words conditioned on the target word wi, formalized by the following softmax function: Pr(wi−K, · · · , wi+K|wi) = Y wc∈C(wi) Pr(wc|wi) = Y wc∈C(wi) exp(w⊤ c · wi) P w′ i∈W exp(w⊤ c · w′ i), (2) in which wc and wi stand for embeddings of context word wc ∈C(wi) and target word wi respectively. We can also follow the strategies of hierarchical softmax and negative sampling proposed in (Mikolov et al., 2013) to accelerate the calculation of softmax. 3.3 SE-WRL Model In this section, we introduce the SE-WRL models with three different strategies to utilize sememe information, including Simple Sememe Aggregation Model (SSA), Sememe Attention over Context Model (SAC) and Sememe Attention over Target Model (SAT). 
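As a concrete picture of this notation, the HowNet annotation can be viewed as a nested mapping from words to senses to sememe sets. The toy dictionary below mirrors the "apple" example of Figure 1, and the average-of-sememes sense vector is the building block reused by the three SE-WRL variants described next; this is an illustrative sketch, not the actual HowNet resource or the released SE-WRL code.

```python
import numpy as np

# Toy stand-in for the HowNet annotation: word w -> senses S(w) -> sememe sets X.
hownet = {
    "apple": {
        "Apple brand": ["computer", "PatternValue", "able", "bring", "SpeBrand"],
        "apple": ["fruit"],
    },
}

dim = 200                                   # embedding size used in the experiments
rng = np.random.default_rng(0)
sememe_emb = {x: rng.normal(size=dim)
              for senses in hownet.values()
              for sememes in senses.values()
              for x in sememes}

def sense_embedding(word, sense):
    """A sense represented as the average of its sememe embeddings."""
    return np.mean([sememe_emb[x] for x in hownet[word][sense]], axis=0)

def ssa_word_embedding(word):
    """Word vector as the average over all sememes of all its senses (the SSA idea below)."""
    vecs = [sememe_emb[x] for sememes in hownet[word].values() for x in sememes]
    return np.mean(vecs, axis=0)

print(sense_embedding("apple", "apple").shape)   # (200,)
print(ssa_word_embedding("apple").shape)         # (200,)
```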
3.3.1 Simple Sememe Aggregation Model The Simple Sememe Aggregation Model (SSA) is a straightforward idea based on Skip-gram model. For each word, SSA considers all sememes in all senses of the word together, and represents the target word using the average of all its sememe embeddings. Formally, we have: w = 1 m X s(w) i ∈S(w) X x(si) j ∈X(w) i x(si) j , (3) which means the word embedding of w is composed by the average of all its sememe embeddings. Here, m stands for the overall number of sememes belonging to w. This model simply follows the assumption that, the semantic meaning of a word is composed of the semantic units, i.e., sememes. As compared to the conventional Skip-gram model, since sememes are shared by multiple words, this model can utilize sememe information to encode latent semantic correlations between words. In this case, similar words that share the same sememes may finally obtain similar representations. 3.3.2 Sememe Attention over Context Model The SSA Model replaces the target word embedding with the aggregated sememe embeddings to encode sememe information into word representation learning. However, each word in SSA model still has only one single representation in different contexts, which cannot deal with polysemy of most words. It is intuitive that we should construct distinct embeddings for a target word according to specific contexts, with the favor of word sense annotation in HowNet. To address this issue, we come up with the Sememe Attention over Context Model (SAC). SAC utilizes the attention scheme to automatically select appropriate senses for context words according to the target word. That is, SAC conducts word sense disambiguation for context words to learn better representations of target words. The structure of the SAC model is shown in Fig. 2. Wt Wt-2 Wt-1 Wt+1 Wt+2 att3 att2 att1 S3 S2 S1 context word sense sememe attention Figure 2: Sememe Attention over Context Model. More specifically, we utilize the original word embedding for target word w, but use sememe embeddings to represent context word wc instead of original context word embeddings. Suppose a word typically demonstrates some specific senses in one sentence. Here we employ the target word embedding as an attention to select the most appropriate senses to make up context word embeddings. We formalize the context word embedding 2052 wc as follows: wc = |S(wc)| X j=1 att(s(wc) j ) · s(wc) j , (4) where s(wc) j stands for the j-th sense embedding of wc, and att(s(wc) j ) represents the attention score of the j-th sense with respect to the target word w, defined as follows: att(s(wc) j ) = exp(w · ˆs(wc) j ) P|S(wc)| k=1 exp(w · ˆs(wc) k ) . (5) Note that, when calculating attention, we use the average of sememe embeddings to represent each sense s(wc) j : ˆs(wc) j = 1 |X(wc) j | |X(wc) j | X k=1 x(sj) k . (6) The attention strategy assumes that the more relevant a context word sense embedding is to the target word w, the more this sense should be considered when building context word embeddings. With the favor of attention scheme, we can represent each context word as a particular distribution over its sense. This can be regarded as soft WSD. As shown in experiments, it will help learn better word representations. 3.3.3 Sememe Attention over Target Model The Sememe Attention over Context Model can flexibly select appropriate senses and sememes for context words according to the target word. 
The process can also be applied to select appropriate senses for the target word, by taking context words as attention. Hence, we propose the Sememe Attention over Target Model (SAT) as shown in Fig. 3. Wt Wt-2 Wt-1 Wt+1 Wt+2 contextual embedding att1 att2 att3 S1 S2 S3 context word sense sememe Figure 3: Sememe Attention over Target Model. Different from SAC model, SAT learns the original word embeddings for context words, but sememe embeddings for target words. We apply context words as attention over multiple senses of the target word w to build the embedding of w, formalized as follows: w = |S(w)| X j=1 att(s(w) j ) · s(w) j , (7) where s(w) j stands for the j-th sense embedding of w, and the context-based attention is defined as follows: att(s(w) j ) = exp(w′ c · ˆs(w) j ) P|S(w)| k=1 exp(w′c · ˆs(w) k ) , (8) where, similar to Eq. (6), we also use the average of sememe embeddings to represent each sense s(w) j . Here, w′ c is the context embedding, consisting of a constrained window of word embeddings in C(wi). We have: w′ c = 1 2K′ k=i+K′ X k=i−K′ wk, k ̸= i. (9) Note that, since in experiment we find the sense selection of the target word only relies on more limited context words for calculating attention, hence we select a smaller K′ as compared to K. Recall that, SAC only uses one target word as attention to select senses of context words, but SAT uses several context words together as attention to select appropriate senses of target words. Hence SAT is expected to conduct more reliable WSD and result in more accurate word representations, which will be explored in experiments. 4 Experiments In this section, we evaluate the effectiveness of our SE-WRL1 models on two tasks including word similarity and word analogy, which are two classical evaluation tasks mainly focusing on evaluating the quality of learned word representations. We also explore the potential of our models in word sense disambiguation with case study, showing the power of our attention-based models. 1https://github.com/thunlp/SE-WRL 2053 4.1 Dataset We use the web pages in Sogou-T2 as the text corpus to learn WRL models. Sogou-T is provided by a Chinese commercial search engine, which contains 2.7 billion words in total. We also utilize the sememe annotation in HowNet. The number of distinct sememes used in this paper is 1, 889. The average senses for each word are about 2.4, while the average sememes for each sense are about 1.6. Throughout the Sogou-T corpus, we find that 42.2% of words have multiple senses. This indicates the significance of WSD. For evaluation, we choose wordsim-240 and wordsim-2973 to evaluate the performance of word similarity computation. The two datasets both contain frequently-used Chinese word pairs with similarity scores annotated manually. We choose the Chinese Word Analogy dataset proposed by (Chen et al., 2015) to evaluate the performance of word analogy inference, that is, w(“king”) −w(“man”) ≃w(“queen”) − w(“woman”). 4.2 Experimental Settings We evaluate three SE-WRL models including SSA, SAC and SAT on all tasks. As for baselines, we consider three conventional WRL models including Skip-gram, CBOW and GloVe. For Skipgram and CBOW, we directly use the code released by Google (Mikolov et al., 2013). GloVe is proposed by (Pennington et al., 2014), which seeks the advantages of the WRL models based on statistics and those based on prediction. Moreover, we propose another model, Maximum Selection over Target Model (MST), for further comparison inspired by (Chen et al., 2014). 
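Before turning to the experiments, here is a minimal NumPy sketch of the sense attention shared by SAC and SAT (Equations 4 through 9), using toy vectors. The function and variable names are illustrative assumptions, not the released SE-WRL implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_senses(query, sense_avg_vecs, sense_vecs):
    """Soft word-sense disambiguation shared by SAC and SAT.

    query          : attention vector (target word embedding for SAC,
                     averaged context embedding w'_c for SAT).
    sense_avg_vecs : per-sense averages of sememe embeddings, used to
                     compute the attention scores (as in Eq. 5 / Eq. 8).
    sense_vecs     : per-sense embeddings mixed by the attention weights.
    Returns the attention weights and the attended word vector (Eq. 4 / Eq. 7).
    """
    scores = softmax(np.array([query @ s for s in sense_avg_vecs]))
    word_vec = sum(a * s for a, s in zip(scores, sense_vecs))
    return scores, word_vec

def sat_context_vector(word_vecs, i, k=2):
    """SAT's context vector w'_c: mean of K'=2 words on each side of position i (Eq. 9),
    truncated at sentence boundaries in this sketch."""
    idx = [j for j in range(i - k, i + k + 1) if j != i and 0 <= j < len(word_vecs)]
    return np.mean([word_vecs[j] for j in idx], axis=0)

# Toy example: a 5-word sentence, target word at position 2 with two senses.
rng = np.random.default_rng(0)
dim = 200
word_vecs = [rng.normal(size=dim) for _ in range(5)]
sense_vecs = [rng.normal(size=dim) for _ in range(2)]
sense_avg_vecs = [rng.normal(size=dim) for _ in range(2)]

ctx = sat_context_vector(word_vecs, i=2, k=2)
weights, target_vec = attend_senses(ctx, sense_avg_vecs, sense_vecs)
print(weights, target_vec.shape)
```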
It represents the current word embeddings with only the most probable sense according to the contexts, instead of viewing a word as a particular distribution over all its senses similar to that of SAT. For a fair comparison, we train these models with the same experimental settings and with their best parameters. As for the parameter settings, we set the context window size K = 8 as the upper bound, and during training, the window size is dynamically selected ranging from 1 to 8 randomly. We set the dimensions of word, sense and sememe embeddings to be the same 200. For 2https://www.sogou.com/labs/resource/ t.php 3https://github.com/Leonard-Xu/CWE/ tree/master/data learning rate α, its initial value is 0.025 and will descend through iterations. We set the number of negative samples to be 25. We also set a lower bound of word frequency as 50, and in the training set, those words less frequent than this bound will be filtered out. For SAT, we set K′ = 2. 4.3 Word Similarity The task of word similarity aims to evaluate the quality of word representations by comparing the similarity ranks of word pairs computed by WRL models with the ranks given by dataset. WRL models typically compute word similarities according to their distances in the semantic space. 4.3.1 Evaluation Protocol In experiments, we choose the cosine similarity between two word embeddings to rank word pairs. For evaluation, we compute the Spearman correlation between the ranks of models and the ranks of human judgments. Model Wordsim-240 Wordsim-297 CBOW 57.7 61.1 GloVe 59.8 58.7 Skip-gram 58.5 63.3 SSA 58.9 64.0 SAC 59.0 63.1 MST 59.2 62.8 SAT 63.2 65.6 Table 1: Evaluation results of word similarity computation. 4.3.2 Experiment Results Table 1 shows the results of these models for word similarity computation. From the results we can observe that: (1) Our SAT model outperforms other models, including all baselines, on both two test sets. This indicates that, by utilizing sememe annotation properly, our model can better capture the semantic relations of words, and learn more accurate word embeddings. (2) The SSA model represents a word with the average of its sememe embeddings. In general, SSA model performs slightly better than baselines, which tentatively proves that sememe information is helpful. The reason is that words which share common sememe embeddings will benefit from each other. Especially, those words with lower frequency, which cannot be learned sufficiently using conventional WRL models, in contrast, can 2054 Model Accuracy Mean Rank Capital City Relationship All Capital City Relationship All CBOW 49.8 85.7 86.0 64.2 36.98 1.23 62.64 37.62 GloVe 57.3 74.3 81.6 65.8 19.09 1.71 3.58 12.63 Skip-gram 66.8 93.7 76.8 73.4 137.19 1.07 2.95 83.51 SSA 62.3 93.7 81.6 71.9 45.74 1.06 3.33 28.52 SAC 61.6 95.4 77.9 70.8 19.08 1.02 2.18 12.18 MST 65.7 95.4 82.7 74.5 50.29 1.05 2.48 31.05 SAT 83.2 98.9 82.4 85.3 14.42 1.01 2.63 9.48 Table 2: Evaluation results of word analogy inference. obtain better word embeddings from SSA simply because their sememe embeddings can be trained sufficiently through other words. (3) The SAT model performs much better than SSA and SAC. This indicates that SAT can obtain more precise sense distribution of a word. The reason has been mentioned above that, different from SAC using only one target word as attention for WSD, SAT adopts richer contextual information as attention for WSD. 
(4) SAT works better than MST, and we can conclude that a soft disambiguation over senses prevents inevitable errors when selecting only one most-probable sense. The result makes sense because, for many words, their various senses are not always entirely different from each other, but share some common elements. In some contexts, a single sense may not convey the exact meaning of this word. 4.4 Word Analogy Word analogy inference is another widely-used task to evaluate the quality of WRL models (Mikolov et al., 2013). 4.4.1 Evaluation Protocol The dataset proposed by (Chen et al., 2015) consists of 1, 124 analogies, which contains three analogy types: (1) capitals of countries (Capital), 677 groups; (2) states/provinces of cities (City), 175 groups; (3) family words (Relationship), 272 groups. Given an analogy group of words (w1, w2, w3, w4), WRL models usually get w2−w1+w3 equal to w4. Hence for word analogy inference, we suppose w4 is missing, and WRL models will rank all candidate words according to their scores as follows: R(w) = cos(w2 −w1 + w3, w), (10) and select the top-ranked word as the answer. For word analogy inference, we consider two evaluation metrics: (1) Accuracy. For each analogy group, a WRL model selects the top-ranked word w = arg maxw R(w), which is judged as positive if w = w4. The percentage of positive samples is regarded as the accuracy score for this WRL model. (2) Mean Rank. For each analogy group, a WRL model will assign a rank for the gold standard word w4 according to the scores computed by Eq. (10). We use the mean rank of all gold standard words as the evaluation metric. 4.4.2 Experiment Results Table 2 shows the evaluation results of these models for word analogy inference. From the table, we can observe that: (1) The SAT model performs best among all models, and the superiority is more significant than that on word similarity computation. This indicates that SAT will enhance the modeling of implicit relations between word embeddings in the semantic space. The reason is that sememes annotated to word senses have encoded these word relations. For example, capital and Cuba are two sememes of the word “Havana”, which provide explicit semantic relations between the words “Cuba” and “Havana”. (2) The SAT model does well on both classes of Capital and City, because some words in these classes have low frequencies, while their sememes occur so many times that sememe embeddings can be learned sufficiently. With these sememe embeddings, these low-frequent words can be learned more efficiently by SAT. (3) It seems that CBOW works better than SAT on Relationship class. Whereas for the mean rank, CBOW gets the worst results, which indicates the performance of CBOW is unstable. On the contrary, although the accuracy of SAT is a bit lower than that of CBOW, SAT seldom gives an outrageous prediction. 
In most wrong cases, SAT predicts the word “grandfather” instead of “grandmother”, which is not completely nonsense, because in HowNet the words “grandmother”, “grandfather”, “grandma” and some other similar words share four common sememes while only one of their sememes differs. These shared sememes make the attention process less discriminative between them. For the wrong cases of CBOW, in contrast, we find that many mistakes involve words with low frequencies, such as “stepdaughter”, which occurs merely 358 times. Considering sememes may relieve this problem.
4.5 Case Study
The above experiments verify the effectiveness of our models for WRL. Here we show some examples of sememes, senses and words for case study.
4.5.1 Word Sense Disambiguation
To demonstrate the validity of Sememe Attention, we select three attention results from the training set, as shown in Table 3. In this table, the first row of each example gives the word-sense-sememe structure of the word. For instance, in the third example, the word has two senses, contingent and troops; contingent has one sememe community, while troops has one sememe army. The three examples all indicate that our models can estimate appropriate distributions of senses for a word given a context.
Word: 苹果 (“Apple brand/apple”); sense1: Apple brand (computer, PatternValue, able, bring, SpeBrand); sense2: apple (fruit)
(Apple is always famous as the king of fruits) Apple brand: 0.28, apple: 0.72
(The Apple brand computer can not start up normally) Apple brand: 0.87, apple: 0.13
Word: 扩散 (“proliferate/metastasize”); sense1: proliferate (disperse); sense2: metastasize (disperse, disease)
(Prevent epidemic from metastasizing) proliferate: 0.06, metastasize: 0.94
(Treaty on the Non-Proliferation of Nuclear Weapons) proliferate: 0.68, metastasize: 0.32
Word: 队伍 (“contingent/troops”); sense1: contingent (community); sense2: troops (army)
(Eight contingents enter the second stage of the team competition) contingent: 0.90, troops: 0.10
(Construct the organization of public security’s troops at the grass-roots unit) contingent: 0.15, troops: 0.85
Table 3: Examples of sememes, senses and words in context with attention.
4.5.2 Effect of Context Words for Attention
We demonstrate the effect of context words for attention in Table 4. The word “Havana” consists of four sememes, among which the two sememes capital and Cuba describe distinct attributes of the word from different aspects.
Word: 哈瓦那 (“Havana”); sememes: 首都 (capital), 古巴 (Cuba)
Context word 古巴 (“Cuba”): capital 0.39, Cuba 0.42
Context word 俄罗斯 (“Russia”): capital 0.39, Cuba -0.09
Context word 雪茄 (“cigar”): capital 0.00, Cuba 0.36
Table 4: Sememe weights for computing attention.
Here, we list three different context words, “Cuba”, “Russia” and “cigar”. Given the context word “Cuba”, both sememes get high weights, indicating their contributions to the meaning of “Havana” in this context. The context word “Russia” is more relevant to the sememe capital. When the context word is “cigar”, the sememe Cuba has more influence, because cigar is a famous specialty of Cuba. From these examples, we can conclude that our Sememe Attention can accurately capture word meanings in complicated contexts.
5 Conclusion and Future Work
In this paper, we propose a novel method to model sememe information for learning better word representations. Specifically, we utilize sememe information to represent various senses of each word and propose Sememe Attention to select appropriate senses in contexts automatically.
We evaluate our models on word similarity and word analogy, and results show the advantages of our SememeEncoded WRL models. We also analyze several cases in WSD and WRL, which confirms our models are capable of selecting appropriate word senses with the favor of sememe attention. We will explore the following research directions in future: (1) The sememe information in HowNet is annotated with hierarchical structure 2056 and relations, which have not been considered in our framework. We will explore to utilize these annotations for better WRL. (2) We believe the idea of sememes is universal and could be wellfunctioned beyond languages. We will explore the effectiveness of sememe information for WRL in other languages. Acknowledgments This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007), and the Key Technologies Research and Development Program of China (No. 2014BAK04B03). This work is also funded by the Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) in Project Crossmodal Learning, NSFC 61621136008 / DFC TRR-169. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Satanjeev Banerjee and Ted Pedersen. 2002. An adapted lesk algorithm for word sense disambiguation using wordnet. In Proceedings of CICLing. pages 136–145. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. JMLR 3:1137–1155. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Conference on Artificial Intelligence. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP. pages 740–750. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP. pages 1025–1035. Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of character and word embeddings. In Proceedings of IJCAI. pages 1236–1242. Zhendong Dong and Qiang Dong. 2003. Hownet-a hybrid language and knowledge resource. In Proceedings of NLP-KE. IEEE, pages 820–824. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning sense-specific word embeddings by exploiting bilingual resources. In Proceedings of COLING. pages 497–507. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL. pages 873–882. Sujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of NAACL. volume 1. Yoong Keok Lee, Hwee Tou Ng, and Tee Kiah Chia. 2004. Supervised word sense disambiguation with support vector machines and multiple knowledge sources. In Proceedings of SENSEVAL-3. pages 137–140. Qun Liu and Sujian Li. 2002. Word similarity computing based on how-net. CLCLP 7(2):59–76. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. volume 2, page 3. George A Miller. 
1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. volume 14, pages 1532–43. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of EMNLP. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. Proceedings of ACL . David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by backpropagating errors. Cognitive modeling 5(3):1. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. pages 3104–3112. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In Proceedings of COLING. pages 151–160. 2057 Fu Xianghua, Liu Guo, Guo Yanyan, and Wang Zhiqiang. 2013. Multi-aspect sentiment analysis for chinese online social reviews based on topic modeling and hownet lexicon. Knowledge-Based Systems 37:186–195. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of NIPS. pages 649–657. 2058
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2059–2068 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1188 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2059–2068 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1188 Learning Character-level Compositionality with Visual Features Frederick Liu1, Han Lu1, Chieh Lo2, Graham Neubig1 1Language Technology Institute 2Electrical and Computer Engineering Carnegie Mellon University, Pittsburgh, PA 15213 {fliu1,hlu2,gneubig}@cs.cmu.edu [email protected] Abstract Previous work has modeled the compositionality of words by creating characterlevel models of meaning, reducing problems of sparsity for rare words. However, in many writing systems compositionality has an effect even on the character-level: the meaning of a character is derived by the sum of its parts. In this paper, we model this effect by creating embeddings for characters based on their visual characteristics, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding. Experiments on a text classification task demonstrate that such model allows for better processing of instances with rare characters in languages such as Chinese, Japanese, and Korean. Additionally, qualitative analyses demonstrate that our proposed model learns to focus on the parts of characters that carry semantic content, resulting in embeddings that are coherent in visual space. 1 Introduction Compositionality—the fact that the meaning of a complex expression is determined by its structure and the meanings of its constituents—is a hallmark of every natural language (Frege and Austin, 1980; Szab´o, 2010). Recently, neural models have provided a powerful tool for learning how to compose words together into a meaning representation of whole sentences for many downstream tasks. This is done using models of various levels of sophistication, from simpler bag-of-words (Iyyer et al., 2015) and linear recurrent neural network (RNN) models (Sutskever et al., 2014; Kiros et al., 2015), to more sophisticated models using tree     Kalb Kälber a Do Do'(polite) Calf Calves         Laurel Whale Salmon Salmon gui jing gui gui (a) (b) (c) (d) han'da ham''ni'''da Figure 1: Examples of character-level compositionality in (a, b) Chinese, (c) Korean, and (d) German. The red part of the characters are shared, and affects the pronunciation (top) or meaning (bottom). structured (Socher et al., 2013) or convolutional networks (Kalchbrenner et al., 2014). In fact, a growing body of evidence shows that it is essential to look below the word-level and consider compositionality within words themselves. For example, several works have proposed models that represent words by composing together the characters into a representation of the word itself (Ling et al., 2015; Zhang et al., 2015; Dhingra et al., 2016). Additionally, for languages with productive word formation (such as agglutination and compounding), models calculating morphologysensitive word representations have been found effective (Luong et al., 2013; Botha and Blunsom, 2014). These models help to learn more robust representations for rare words by exploiting morphological patterns, as opposed to models that operate purely on the lexical level as the atomic units. 
For many languages, compositionality stops at the character-level: characters are atomic units of meaning or pronunciation in the language, and no further decomposition can be done.1 However, for other languages, character-level compositionality, where a character’s meaning or pronunciation can 1In English, for example, this is largely the case. 2059 Lang Geography Sports Arts Military Economics Transportation Chinese 32.4k 49.8k 50.4k 3.6k 82.5k 40.4k Japanese 18.6k 82.7k 84.1k 81.6k 80.9k 91.8k Korean 6k 580 5.74k 840 5.78k 1.68k Lang Medical Education Food Religion Agriculture Electronics Chinese 30.3k 66.2k 554 66.9k 89.5k 80.5k Japanese 66.5k 86.7k 20.2k 98.1k 97.4k 1.08k Korean 16.1k 4.71k 33 2.60k 1.51k 1.03k Table 1: By-category statistics for the Wikipedia dataset. Note that Food is the abbreviation for “Food and Culture” and Religion is the abbreviation for “Religion and Belief”. be derived from the sum of its parts, is very much a reality. Perhaps the most compelling example of compositionality of sub-character units can be found in logographic writing systems such as the Han and Kanji characters used in Chinese and Japanese, respectively.2 As shown on the left side of Fig. 1, each part of a Chinese character (called a “radical”) potentially contributes to the meaning (i.e., Fig. 1(a)) or pronunciation (i.e., Fig. 1(b)) of the overall character. This is similar to how English characters combine into the meaning or pronunciation of an English word. Even in languages with phonemic orthographies, where each character corresponds to a pronunciation instead of a meaning, there are cases where composition occurs. Fig. 1(c) and (d) show the examples of Korean and German, respectively, where morphological inflection can cause single characters to make changes where some but not all of the component parts are shared. In this paper, we investigate the feasibility of modeling the compositionality of characters in a way similar to how humans do: by visually observing the character and using the features of its shape to learn a representation encoding its meaning. Our method is relatively simple, and generalizable to a wide variety of languages: we first transform each character from its Unicode representation to a rendering of its shape as an image, then calculate a representation of the image using Convolutional Neural Networks (CNNs) (Cun et al., 1990). These features then serve as inputs to a down-stream processing task and trained in an end-to-end manner, which first calculates a loss function, then back-propagates the loss back to the CNN. 2Other prominent examples are largely for extinct languages: Egyptian hieroglyphics, Mayan glyphs, and Sumerian cuneiform scripts (Daniels and Bright, 1996). As demonstrated by our motivating examples in Fig. 1, in logographic languages character-level semantic or phonetic similarity is often indicated by visual cues; we conjecture that CNNs can appropriately model these visual patterns. Consequently, characters with similar visual appearances will be biased to have similar embeddings, allowing our model to handle rare characters effectively, just as character-level models have been effective for rare words. To evaluate our model’s ability to learn representations, particularly for rare characters, we perform experiments on a downstream task of classifying Wikipedia titles for three Asian languages: Chinese, Japanese, and Korean. 
We show that our proposed framework outperforms a baseline model that uses standard character embeddings for instances containing rare characters. A qualitative analysis of the characteristics of the learned embeddings of our model demonstrates that visually similar characters share similar embeddings. We also show that the learned representations are particularly effective under low-resource scenarios and complementary with standard character embeddings; combining the two representations through three different fusion methods (Snoek et al., 2005; Karpathy et al., 2014) leads to consistent improvements over the strongest baseline without visual features. 2 Dataset Before delving into the details of our model, we first describe a dataset we constructed to examine the ability of our model to capture the compositional characteristics of characters. Specifically, the dataset must satisfy two desiderata: (1) it must be necessary to fully utilize each character in the input in order to achieve high accuracy, and (2) there must be enough regularity and com2060 100 101 102 103 104 106 100 101 102 103 104 105 Rank Frequency ⎯⎯ Chinese ⎯⎯"Japanese ⎯⎯ Korean Rank < 20% Freq. > 80% Figure 2: The character rank-frequency distribution of the corpora we considered in this paper. All three languages have a long-tail distribution. positionality in the characters of the language. To satisfy these desiderata, we create a text classification dataset where the input is a Wikipedia article title in Chinese, Japanese, or Korean, and the output is the category to which the article belongs.3 This satisfies (1), because Wikipedia titles are short and thus each character in the title will be important to our decision about its category. It also satisfies (2), because Chinese, Japanese, and Korean have writing systems with large numbers of characters that decompose regularly as shown in Fig. 1. While this task in itself is novel, it is similar to previous work in named entity type inference using Wikipedia (Toral and Munoz, 2006; Kazama and Torisawa, 2007; Ratinov and Roth, 2009), which has proven useful for downstream named entity recognition systems. 2.1 Dataset Collection As the labels we would like to predict, we use 12 different main categories from the Wikipedia web page: Geography, Sports, Arts, Military, Economics, Transportation, Health Science, Education, Food Culture, Religion and Belief, Agriculture and Electronics. Wikipedia has a hierarchical structure, where each of these main categories has a number of subcategories, and each subcategory has its own subcategories, etc. We traverse this hierarchical structure, adding each main category tag to all of its descendants in this subcategory tree structure. In the case that a particular article is the descendant of multiple main categories, we favor the main category that minimizes the depth of the 3The link to the dataset and the crawling scripts – https://github.com/frederick0329/ Wikipedia_title_dataset Geography Sports Arts Military Economics Transportation Health Science Education Food Culture Religion and Belief Agriculture Electronics Visual model (Image as input) Lookup model (Symbol as input)  CNN CNN CNN    Softmax GRU 36 36   Figure 3: An illustration of two models, our proposed VISUAL model at the top and the baseline LOOKUP model at the bottom using the same RNN architecture. A string of characters (e.g. “温 病学”), each converted into a 36x36 image, serves as input of our VISUAL model. 
dc is the dimension of the character embedding for the LOOKUP model. article in the tree (e.g., if an article is two steps away from Sports and three steps away from Arts, it will receive the “Sports” label). We also perform some rudimentary filtering, removing pages that match the regular expression “.*:.*”, which catches special pages such as “title:agriculture”. 2.2 Statistics For Chinese, Japanese, and Korean, respectively, the number of articles is 593k/810k/46.6k, and the average length and standard deviation of the title is 6.25±3.96/8.60±5.58/6.10±3.71. As shown in Fig. 2, the character rank-frequency distributions of all three languages follows the 80/20 rule (Newman, 2005) (i.e., top 20% ranked characters that appear more than 80% of total frequencies), demonstrating that the characters in these languages belong to a long tail distribution. We further split the dataset into training, validation, and testing sets with a 6:2:2 ratio. The category distribution for each language can be seen in Tab. 1. Chinese has two varieties of characters, traditional and simplified, and the dataset is a mix of the two. Hence, we transform this dataset into two separate sets, one completely simplified and the other completely traditional using the Chinese text converter provided with Mac OS. 3 Model Our overall model for the classification task follows the encoder model by Sutskever et al. (2014). 2061 Layer# 3-layer CNN Configuration 1 Spatial Convolution (3, 3) →32 2 ReLu 3 MaxPool (2, 2) 4 Spatial Convolution (3, 3) →32 5 ReLu 6 MaxPool (2, 2) 7 Spatial Convolution (3, 3) →32 8 ReLu 9 Linear (800, 128) 10 ReLu 11 Linear (128, 128) 12 ReLu Table 2: Architecture of the CNN used in the experiments. All the convolutional layers have 32 3×3 filters. We calculate character representations, use a RNN to combine the character representations into a sentence representation, and then add a softmax layer after that to predict the probability for each class. As shown in Fig. 2.1, the baseline model, which we call it the LOOKUP model, calculates the representation for each character by looking it up in a character embedding matrix. Our proposed model, the VISUAL model instead learns the representation of each character from its visual appearance via CNN. LOOKUP model Given a character vocabulary C, for the LOOKUP model as in the bottom part of Fig. 2.1, the input to the network is a stream of characters c1, c2, ...cN, where cn ∈C. Each character is represented by a 1-of-|C| (one-hot) encoding. This one-hot vector is then multiplied by the lookup matrix TC ∈R|C|×dc, where dc is the dimension of the character embedding. The randomly initialized character embeddings were optimized with classification loss. VISUAL model The proposed method aims to learn a representation that includes image information, allowing for better parameter sharing among characters, particularly characters that are less common. Different from the LOOKUP model, each character is first transformed into a 36-by-36 image based on its Unicode encoding as shown in the upper part of Fig 2.1. We then pass the image through a CNN to get the embedding for the image. The parameters for the CNN are learned through backpropagation from the classification loss. Because we are training embeddings based on this classification loss, we expect that the CNN will focus on parts of the image that contain semantic information useful for category classification, a hypothesis that we examine in the experiments (see Section 5.5). 
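As a concrete illustration of this rendering step, the sketch below draws a glyph onto a blank 36×36 canvas and normalizes the pixels to serve as CNN input. It is our own minimal example rather than the authors' code; the font path is a placeholder for any font that covers the target script, and PIL and NumPy are assumed to be available.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_char(ch, size=36, font_path="NotoSansCJK-Regular.ttc"):
    """Render one character as a size x size grayscale array in [0, 1]."""
    canvas = Image.new("L", (size, size), color=255)      # white background
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.truetype(font_path, size=size - 4)   # leave a small margin
    draw.text((2, 0), ch, fill=0, font=font)              # black glyph
    return np.asarray(canvas, dtype=np.float32) / 255.0   # one character's CNN input
```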
In more detail, the specific structure of the CNN that we utilize consists of three convolution layers where each convolution layer is followed by the max pooling and ReLU nonlinear activation layers. The configurations of each layer are listed in Tab. 2. The output vector for the image embeddings also has size dc which is the same as the LOOKUP model. Encoder and Classifier For both the LOOKUP and the VISUAL models, we adopt an RNN encoder using Gated Recurrent Units (GRUs) (Chung et al., 2014). Each of the GRU units processes the character embeddings sequentially. At the end of the sequence, the incremental GRU computation results in a hidden state e embedding the sentence. The encoded sentence embedding is passed through a linear layer whose output is the same size as the number of classes. We use a softmax layer to compute the posterior class probabilities: P(y = j|e) = exp(wT j e + bj) PL i=1 exp(wT i e + bi) (1) To train the model, we use cross-entropy loss between predicted and true targets: J = 1 B B X i=1 L X j=1 −ti,j log(pi,j) (2) where ti,j ∈{0, 1} represents the ground truth label of the j-th class in the i-th Wikipedia page title. B is the batch size and L is the number of categories. 4 Fusion-based Models One thing to note is that the LOOKUP and the VISUAL models have their own advantages. The LOOKUP model learns embedding that captures the semantics of each character symbol without sharing information with each other. In contrast, the proposed VISUAL model directly learns embedding from visual information, which naturally shares information between visually similar characters. This characteristic gives the VISUAL 2062 Lookup/Visual 100% 50% 12.5% zh trad 0.55/0.54 0.53/0.50 0.48/0.47 zh simp 0.55/0.54 0.53/0.52 0.48/0.46 ja 0.42/0.39 0.47/0.45 0.44/0.41 ko 0.47/0.42 0.44/0.39 0.37/0.36 Table 3: The classification results of the LOOKUP / VISUAL models for different percentages of full training size. model the ability to generalize better to rare characters, but also has the potential disadvantage of introducing noise for characters with similar appearances but different meanings. With the complementary nature of these two models in mind, we further combine the two embeddings to achieve better performances. We adopt three fusion schemes, early fusion, late fusion (described by Snoek et al. (2005) and Karpathy et al. (2014)), and fallback fusion, a method specific to this paper. Early Fusion Early fusion works by concatenating the two varieties of embeddings before feeding them into the RNN. In order to ensure that the dimensions of the RNN are the same after concatenation, the concatenated vector is fed through a hidden layer to reduce the size from 2 × dc to dc. The whole model is then fine-tuned with training data. Late Fusion Instead of learning a joint representation like early fusion, late fusion averages the model predictions. Specifically, it takes the output of the softmax layers from both models and averages the probabilities to create a final distribution used to make the prediction. Fallback Fusion Our final fallback fusion method hypothesizes that our VISUAL model does better with instances which contain more rare characters. First, in order to quantify the overall rareness of an instance consisting of multiple characters, we calculate the average training set frequency of the characters therein. 
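A minimal sketch of this rareness score, under the assumption that a dictionary of training-set character counts is kept around (the names are ours, not the paper's):

```python
def avg_char_frequency(title, train_char_freq):
    """Average training-set frequency of the characters in one title.

    train_char_freq maps each character to its count in the training set;
    characters that never appear in training contribute 0 to the average.
    """
    if not title:
        return 0.0
    return sum(train_char_freq.get(c, 0) for c in title) / len(title)
```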
The fallback fusion method uses the VISUAL model to predict testing instances with average character frequency below or equal to a threshold (here we use 0.0 frequency as cutoff, which means all characters in the instance do not appear in the training set), and uses the LOOKUP model to predict the rest of the instances. 5 Experiments and Results In this section, we compare our proposed VISUAL model with the baseline LOOKUP model through three different sets of experiments. First, we examine whether our model is capable of classifying text and achieving similar performance as the baseline model. Next, we examine the hypothesis that our model will outperform the baseline model when dealing with low frequency characters. Finally, we examine the fusion methods described in Section 4. 5.1 Experimental Configurations The dimension of the embeddings and batch size for both models are set to dc = 128 and B = 400, respectively. We build our proposed model using Torch (Collobert et al., 2002), and use Adam (Kingma and Ba, 2014) with a learning rate η = 0.001 for stochastic optimization. The length of each instance is cut off or padded to 10 characters for batch training. 5.2 Comparison with the Baseline Model In this experiment, we examine whether our VISUAL model achieves similar performance with the baseline LOOKUP model in classification accuracy. The results in Tab. 3 show that the baseline model performs 1-2% better across four datasets; this is due to the fact that the LOOKUP model can directly learn character embeddings that capture the semantics of each character symbol for frequent characters. In contrast, the VISUAL model learns embeddings from visual information, which constraints characters that has similar appearance to have similar embeddings. This is an advantage for rare characters, but a disadvantage for high frequency characters because being similar in appearance does not always lead to similar semantics. To demonstrate that this is in fact the case, besides looking at the overall classification accuracy, we also examine the performance on classifying low frequency instances which are sorted according to the average training set frequency of the characters therein. Tab. 4 and Fig. 4 both show that our model performs better in the 100 lowest frequency instances (the intersection point of the two models). More specifically, take Fig. 4(a)’ as example, the solid (proposed) line is higher than the dashed (baseline) line up to 102, indicating that the proposed model outperforms the baseline for the 2063 10 1 10 2 10 3 10 0 10 1 10 2 10 3 10 2 10 3 10 0 10 1 10 2 10 3 10 2 10 3 10 0 10 1 10 2 10 3 10 2 10 3 10 0 10 1 10 2 10 3 Accumulated Number of Correctly Predicted Instances Rank (a) (b) (c) (d) ⎯⎯!Visual,(TP(=(100% ⎯⎯!Visual,(TP(=(50% ⎯⎯!Visual,(TP(=(12.5% ⎯!⎯!Lookup,(TP(=(100% ⎯!⎯!Lookup,(TP(=(50% ⎯!⎯!Lookup,(TP(=(12.5% ⎯⎯!Visual,(TP(=(100% ⎯⎯!Visual,(TP(=(50% ⎯⎯!Visual,(TP(=(12.5% ⎯!⎯!Lookup,(TP(=(100% ⎯!⎯!Lookup,(TP(=(50% ⎯!⎯!Lookup,(TP(=(12.5% ⎯⎯!Visual,(TP(=(100% ⎯⎯!Visual,(TP(=(50% ⎯⎯!Visual,(TP(=(12.5% ⎯!⎯!Lookup,(TP(=(100% ⎯!⎯!Lookup,(TP(=(50% ⎯!⎯!Lookup,(TP(=(12.5% ⎯⎯!Visual,(TP(=(100% ⎯⎯!Visual,(TP(=(50% ⎯⎯!Visual,(TP(=(12.5% ⎯!⎯!Lookup,(TP(=(100% ⎯!⎯!Lookup,(TP(=(50% ⎯!⎯!Lookup,(TP(=(12.5% Figure 4: Experiments on different training sizes for four different datasets. More specifically, we consider three different training data size percentages (TPs) (100%, 50%, and 12.5%) and four datasets: (a) traditional Chinese, (b) simplified Chinese, (c) Japanese, and (d) Korean. 
We calculate the accumulated number of correctly predicted instances for the VISUAL model (solid lines) and the LOOKUP model (dashed lines). This figure is a log-log plot, where x-axis shows rarity (rarest to the left), y-axis shows cumulative correctly classified instances up to this rank; a perfect classifier will result in a diagonal line. first 100 instances. Lines depart the x-axis when the model classifies its first instance correctly, and the LOOKUP model did not correctly classify any of the first 80 rarest instances, resulting in it crossing later than the proposed model. This confirms that the VISUAL model can share visual information among characters and help to classify low frequency instances. For training time, visual features take significantly more time, as expected. VISUAL is 30x slower than LOOKUP, although they are equivalent at test time. For space, images of Chinese characters took 36MB to store for 8985 characters. 5.3 Experiments on Different Training Sizes In our second experiment, we consider two smaller training sizes (i.e., 50% and 12.5% of the full training size) indicated by green and red lines in Fig. 4. We performed this experiment under the hypothesis that because the proposed method was more robust to infrequent characters, the proposed model may perform better in low-resourced scenarios. If this is the case, the intersection point of the two models will shift right because of the increase of the number of instances with low average character frequency. Lookup/Visual 100 1000 10000 zh trad 0.22/0.49 0.35/0.35 0.40/0.39 zh simp 0.25/0.53 0.39/0.37 0.41/0.40 ja 0.30/0.35 0.45/0.41 0.44/0.41 ko 0.44/0.33 0.44/0.33 0.48/0.42 Table 4: Classification results for the LOOKUP / VISUAL of the k lowest frequency instances across four datasets. The 100 lowest frequency instances for traditional and simplified Chinese and Korean were both significant (p-value < 0.05). Those for Japanese were not (p-value = 0.13); likely because there was less variety than Chinese and more data than Korean. As we can see in Fig. 4, the intersection point for 100% training data lies between the intersection point for 50% training data and 12.5%. This disagrees with our hypothesis; this is likely because while the number of low-frequency characters increases, smaller amounts of data also adversely impact the ability of CNN to learn useful visual features, and thus there is not a clear gain nor loss when using the proposed method. As a more extreme test of the ability of our proposed framework to deal with the unseen char2064 zh trad zh simp ja ko Lookup 0.5503 0.5543 0.4914 0.4765 Visual 0.5434 0.5403 0.4775 0.4207 early 0.5520 0.5546 0.4896 0.4796 late 0.5658 0.5685 0.5029 0.4869 fall 0.5507 0.5547 0.4914 0.4766 Table 5: Experiment results for three different fusion methods across 4 datasets. The late fusion model was better (p-value < 0.001) across four datasets. acters in the test set, we use traditional Chinese as our training data and simplified Chinese as our testing data. The model was able to achieve around 40% classification accuracy when we use the full training set, compared to 55%, which is achieved by the model trained on simplified Chinese. This result demonstrates that the model is able to transfer between similar scripts, similarly to how most Chinese speakers can guess the meaning of the text, even if it is written in the other script. 5.4 Experiment on Different Fusion Methods Results of different fusion methods can be found in Tab. 5. 
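For reference, the late-fusion prediction amounts to averaging the two class posteriors; a hedged sketch, under our own naming and the assumption that each trained model exposes its softmax output for a title, is:

```python
import numpy as np

def late_fusion_predict(lookup_probs, visual_probs):
    """Average the LOOKUP and VISUAL softmax outputs and return the argmax class."""
    fused = 0.5 * (np.asarray(lookup_probs) + np.asarray(visual_probs))
    return int(np.argmax(fused)), fused
```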
The results show that late fusion gives the best performance among all the fusion schemes combining the LOOKUP model and the proposed VISUAL model. Early fusion achieves small improvements for all languages except Japanese, where it displays a slight drop. Unsurprisingly, fallback fusion performs better than the LOOKUP model and the VISUAL model alone, since it directly targets the weakness of the LOOKUP model (e.g., rare characters) and replaces the results with the VISUAL model. These results show that simple integration, no matter which schemes we use, is beneficial, demonstrating that both methods are capturing complementary information. 5.5 Visualization of Character Embeddings Finally, we qualitatively examine what is learned by our proposed model in two ways. First, we visualize which parts of the image are most important to the VISUAL model’s embedding calculation. Second, we show the 6-nearest neighbor results for characters using both the LOOKUP and the VISUAL embeddings. Iron Bronze Salmon Serranidae Silk Coil Rhyme Pleased Wave Put on Cypress Pillar Cuckoo Eagle Mosquito Ant Figure 5: Examples of how much each part of the character contributes to its embedding (the darker the more). Two characters are shown per radical to emphasize that characters with same radical have similar patterns. Emphasis of the VISUAL Model In order to delve deeper into what the VISUAL model has learned, we measure a modified version of the occlusion sensitivity proposed by Zeiler and Fergus (2014) by masking the original character image in four ways, and examine the importance of each part of the character to the model’s calculated representations. Specifically, we leave only the upper half, bottom half, left half, or right half of the image, and mask the remainder with white pixels since Chinese characters are usually formed by combining two radicals vertically or horizontally. We run these four images forward through the CNN part of the model and calculate the L2 distance between the masked image embeddings with the full image embedding. The larger the distance, the more the masked part of the character contributes to the original embedding. The contribution of each part (e.g. the L2 distance) is represented as a heat map, and then it is normalized to adjust the opacity of the character strokes for better visualization. The value of each corner of the heatmap is calculated by adding the two L2 distances that contribute to this corner. The visualization is shown in Fig. 5. The meaning of each Chinese character in English is shown below the Chinese character. The opacity of the character strokes represent how much the corresponding parts contribute to the original embedding (the darker the more). In general, the darker part of the character is related to its semantics. For example, “金” means gold in Chinese, which is 2065                             ! ! ! ! ! !                                                Visual'model Lookup'model Visual'model Lookup'model Figure 6: Visualization of the Chinese traditional characters by finding the 6-nearest neighbors of the query (i.e., center) characters. The highlighted red indicates the radical along with the meaning of the characters. highlighted in both “鐵” (Iron) and “銅” (Bronze). We can also find similar results for other examples shown in Fig. 5. Fig. 5 also demonstrated that our model captures the compositionality of Chinese characters, both meaning of sub-character units and their structure (e.g. 
the semantic content tends to be structurally localized on one side of a Chinese character). K-nearest neighbors Finally, to illustrate the difference of the learned embeddings between the two models, we display 6-nearest neighbors (L2 distance) for selected characters in Fig. 6. As can be seen, the VISUAL embedding for characters with similar appearances are close to each other. In addition, similarity in the radical part indicates semantic similarity between the characters. For example, the characters with radical “鳥” all refer to different type of birds. The LOOKUP embedding do not show such feature, as it learns the embedding individually for each symbol and relies heavily on the training set and the task. In fact, the characters shown in Fig. 6 for the LOOKUP model do not exhibit semantic similarity either. There are two potential explanations for this: First, the category classification task that we utilized do not rely heavily on the finegrained semantics of each character, and thus the LOOKUP model was able to perform well without exactly capturing the semantics of each character precisely. Second, the Wikipedia dataset contains a large number of names and location and the characters therein might not have the same semantic meaning used in daily vocabulary. 6 Related Work Methods that utilize neural networks to learn distributed representations of words or characters have been widely developed. However, word2vec (Mikolov et al., 2013), for example, requires storing an extremely large table of vectors for all word types. For example, due to the size of word types in twitter tweets, work has been done to generate vector representations of tweets at character-level (Dhingra et al., 2016). There is also work done in understanding mathematical expressions with a convolutional network for text and layout recognition by using an attention-based neural machine translation system (Deng et al., 2016). They tested on realworld rendered mathematical expressions paired with LaTeX markup and show the system is effective at generating accurate markup. Other than that, there are several works that combine visual information with text in improving machine translation (Sutskever et al., 2014), visual question answering, caption generation (Xu et al., 2015), etc. These works extract image representations from a pre-trained CNN (Zhu et al., 2016; Wang et al., 2016). Unrelated to images, CNNs have also been used for text classification (Kim, 2014; Zhang et al., 2015). These models look at the sequential dependencies at the word or character-level and achieve the state-of-the-art results. These works inspire us to use CNN to extract features from image and serve as the input to the RNN. Our model is able to directly back-propagate the gradient all the way through the CNN, which generates visual embeddings, in a way such that the embedding can contain both semantic and visual information. Several techniques for reducing the rare words effects have been introduced in the literature, including spelling expansion (Habash, 2008), dictionary term expansion (Habash, 2008), proper name transliteration (Daum´e and Jagarlamudi, 2011), treating words as a sequence of characters (Luong and Manning, 2016), subword units (Sennrich et al., 2015), and reading text as bytes (Gillick et al., 2015). However, most of these techniques still have no mechanism for handling low frequency characters, which are the target of this work. 
Finally, there are works on improving embeddings with radicals, which explicitly splits Chinese characters into radicals based on a dictionary 2066 of what radicals are included in which characters (Li et al., 2015; Shi et al., 2015; Yin et al., 2016). The motivation of this method is similar to ours, but is only applicable to Chinese, in contrast to the method in this paper, which works on any language for which we can render text. 7 Conclusion and Future Work In this paper, we proposed a new framework that utilizes appearance of characters, convolutional neural networks, recurrent neural networks to learn embeddings that are compositional in the component parts of the characters. More specifically, we collected a Wikipedia dataset, which consists of short titles of three different languages and satisfies the compositionality in the characters of the language. Next, we proposed an end-to-end model that learns visual embeddings for characters using CNN and showed that the features extracted from the CNN include both visual and semantic information. Furthermore, we showed that our VISUAL model outperforms the LOOKUP baseline model in low frequency instances. Additionally, by examining the character embeddings visually, we found that our VISUAL model is able to learn visually related embeddings. In summary, we tackled the problem of rare characters by using embeddings learned from images. In the future, we hope to further generalize this method to other tasks such as pronunciation estimation, which can take advantage of the fact that pronunciation information is encoded in parts of the characters as demonstrated in Fig. 1, or machine translation, which could benefit from a wholistic view that considers both semantics and pronunciation. We also hope to apply the model to other languages with complicated compositional writing systems, potentially including historical texts such as hieroglyphics or cuneiform. Acknowledgments We thank Taylor Berg-Kirkpatrick, Adhiguna Kuncoro, Chen-Hsuan Lin, Wei-Cheng Chang, Wei-Ning Hsu and the anonymous reviewers for their enlightening comments and feedbacks. References Jan A Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In ICML. pages 1899–1907. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Ronan Collobert, Samy Bengio, and Johnny Marithoz. 2002. Torch: A modular machine learning software library. Y. Le Cun, B. Boser, J. S. Denker, R. E. Howard, W. Habbard, L. D. Jackel, and D. Henderson. 1990. Advances in neural information processing systems 2. pages 396–404. Peter T Daniels and William Bright. 1996. The world’s writing systems. Oxford University Press. Hal Daum´e and Jagadeesh Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. In ACL-HLT. pages 407–412. Yuntian Deng, Anssi Kanervisto, and Alexander M. Rush. 2016. What you get is what you see: A visual markup decompiler. arXiv preprint arXiv:1609.04938 . Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. 2016. Tweet2vec: Character-based distributed representations for social media. ACL . Gottlob Frege and John Langshaw Austin. 1980. The foundations of arithmetic: A logico-mathematical enquiry into the concept of number. Northwestern University Press. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. 
Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103 . Nizar Habash. 2008. Four techniques for online handling of out-of-vocabulary words in Arabic-English statistical machine translation. In HLT-Short. pages 57–60. Mohit Iyyer, Varun Manjunatha, and Jordan L BoydGraber. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. ACL pages 655–665. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. 2014. Large-scale video classification with convolutional neural networks. In CVPR. pages 1725–1732. Jun’ichi Kazama and Kentaro Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In EMNLP-CoNLL. pages 698– 707. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. pages 1746– 1751. 2067 Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS. pages 3294–3302. Yanran Li, Wenjie Li, Fei Sun, and Sujian Li. 2015. Component-enhanced chinese character embeddings. EMNLP pages 829–834. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP. pages 1520– 1530. Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. ACL pages 1054–1063. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL. pages 104–113. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. pages 3111–3119. Mej Newman. 2005. Power laws, Pareto distributions and Zipf’s law. CONTEMP PHYS pages 323–351. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL. pages 147–155. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. ACL pages 1715–1725. Xinlei Shi, Junjie Zhai, Xudong Yang, Zehua Xie, and Chao Liu. 2015. Radical embedding: Delving deeper to chinese radicals. In ACL. pages 594–598. Cees GM Snoek, Marcel Worring, and Arnold WM Smeulders. 2005. Early versus late fusion in semantic video analysis. In ACM MM. pages 399–402. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. pages 1631–1642. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. pages 3104–3112. Zolt´an Gendler Szab´o. 2010. Compositionality. Stanford encyclopedia of philosophy . Antonio Toral and Rafael Munoz. 2006. A proposal to automatically build and maintain gazetteers for named entity recognition by using wikipedia. In EACL. pages 56–61. Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. 2016. Cnn-rnn: A unified framework for multi-label image classification. In CVPR. 
pages 2285–2294. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. Rongchao Yin, Quan Wang, Rui Li, Peng Li, and Bin Wang. 2016. Multi-granularity chinese word embedding. EMNLP pages 981–986. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In ECCV. Springer, pages 818–833. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. pages 649–657. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In CVPR. pages 4995–5004. 2068
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2069–2077 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1189 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2069–2077 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1189 A Progressive Learning Approach to Chinese SRL Using Heterogeneous Data Qiaolin Xia†, Lei Sha†, Baobao Chang† and Zhifang Sui†⋆ †Key Laboratory of Computational Linguistics (Ministry of Education), School of EECS, Peking University, 100871, Beijing, China ⋆Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China {xql,shalei,chbb,szf}@pku.edu.cn Abstract Previous studies on Chinese semantic role labeling (SRL) have concentrated on a single semantically annotated corpus. But the training data of single corpus is often limited. Whereas the other existing semantically annotated corpora for Chinese SRL are scattered across different annotation frameworks. But still, Data sparsity remains a bottleneck. This situation calls for larger training datasets, or effective approaches which can take advantage of highly heterogeneous data. In this paper, we focus mainly on the latter, that is, to improve Chinese SRL by using heterogeneous corpora together. We propose a novel progressive learning model which augments the Progressive Neural Network with Gated Recurrent Adapters. The model can accommodate heterogeneous inputs and effectively transfer knowledge between them. We also release a new corpus, Chinese SemBank, for Chinese SRL1. Experiments on CPB 1.0 show that our model outperforms state-of-the-art methods. 1 Introduction Semantic role labeling (SRL) is one of the fundamental tasks in natural language processing because of its important role in information extraction (Bastianelli et al., 2013), statistical machine translation (Aziz et al., 2016; Xiong et al., 2012), and so on. However, state-of-the-art performance of Chinese SRL is still far from satisfactory. And data sparsity has been a bottleneck which can not be 1http://www.klcl.pku.edu.cn/ShowNews.aspx?id=156 Predicate given: 修改 revise (a) [ ArgM-TMP在这期间] Meanwhile , [Arg0全国人大常委会] the NPC Standing Committee ... 广泛 widely 征求 solicit 意见, opinions, [ArgM-ADV多次 for many times ] [ArgM-ADV反复 repeatedly ] [Rel修改 revise ] [Arg1*pro* (omitted) . ] 。 (b) [agent他们] They 对 to [patient系统 system ]进行了 made [Rel修改 revise . ] 。 Figure 1: Sentences from (a) CPB and (b) our heterogeneous dataset. In CPB, each predicate (e.g., 修改) has a specific set of core roles given with numbers (e.g., Arg0). While our dataset uses a different semantic role set, and all roles are nonpredicate-specific. ignored. For English, the most commonly used benchmark dataset PropBank (Xue and Palmer, 2003) has about 54,900 sentences. But for Chinese, there are only 10,364 sentences in Chinese PropBank 1.0 (CPB) (with about 35,700 propositions) (Xue, 2008). To mitigate the data sparsity, models incorporating heterogeneous resources have been introduced to improve Chinese SRL performance (Wang et al., 2015; Guo et al., 2016; Li et al., 2016). The heterogeneous resources introduced by these models include other semantically annotated corpora with annotation schema different to that used in PropBank, and even of a different language. 
The challenge here lies in the fact that those newly introduced resources are heterogeneous in nature, without sharing the same tagging schema, semantic role set, syntactic tag set and domain. For example, Wang et al. (2015) introduced a heterogeneous dataset, Chinese NetBank, by pretraining word embeddings. Specifically, they learn an LSTM RNN model based on NetBank first, then initialize a new model with the 2069 pretrained embeddings obtained from NetBank, and then train it on CPB. Chinese NetBank (Yulin, 2007) is also a corpus annotated with semantic roles, but using a very different role set and annotation schema. Wang’s method can inherit knowledge acquired from other resources conveniently, but only at word representation level, missing more generalized semantic meanings in higher hidden layers. Li (2016) proposed a twopass training approach to use corpora of two languages, but a few non-common roles are ignored in the first pass. Guo et al. (2016) proposed a unified neural network model for SRL and relation classification (RC). It can learn two tasks at the same time, but cannot filter out harmful features learned in incompatible tasks. Recently, Progressive Neural Networks (PNN) model was proposed by Rusu et al. (2016) to transfer learned reinforcement learning policies from one game to another, or from simulation to the real robot. PNN “freezes” learned parameters once starting to learn a new task, and it uses lateral connections, namely adapter, to access previously learned features. Inspired by the PNN model, we propose a progressive learning model to Chinese semantic role labeling in this paper. Especially, we extend the model with Gated Recurrent Adapters (GRA). Since the standard PNN takes pixels as input, policies as output, it is not suitable for SRL task we focus in this context. Moreover, to handle long sentences in the corpus, we enhance adapters with internal memories, and gates to keep the gradient stable. The contributions of this paper are threefold: 1. We reconstruct PNN columns with bidirectional LSTMs to introduce heterogeneous corpora to improve Chinese SRL. The architecture can also be applied to a wider range of NLP tasks, like event extraction and relation classification, etc. 2. We further extend the model with GRA to remember and take advantage of what has been transferred, thus improve the performance on long sentences. 3. We also release a new corpus, Chinese SemBank, which was annotated with the schema different to that used in CPB. We hope that it will be helpful for future work on SRL tasks. Subjective roles: agent(施事), co-agent(同事), experiencer(当事) , indirect experiencer(接事) Objective roles: patient(受事), relative(系事), dative(与事) , result(结果), content(内容), target(对象) Space roles: a point of departure(起点) , a point of arrival(终点) , path(路径), direction(方向), location(处所) Time roles: start time(起始), end time(结束), time point(时点) , duration(时段) Comparison roles: comparison subject(比较 主体), comparison object(比较对象) , comparison range(比较范围), comparison thing(比较项目) , comparison result(比较结 果) Others: instrument(工具) , material(材料) , manner(方式) , quantity(物量) , range(范围) , reason(原因) , purpose(目的) Table 1: Semantic roles in Chinese SemBank We use our new corpus as a heterogeneous resource, and evaluate the proposed model on the benchmark dataset CPB 1.0. The experiment shows that our approach achieves 79.67% F1 score, significantly outperforms existing state-ofthe-art systems by a large margin (Section 5). 
2 Heterogeneous Corpora for Chinese SRL In this paper, we provide a new SRL corpus Chinese SemBank (CSB) and use it as an example of heterogeneous data in our experiments. In this section, we first briefly introduce the corpus, then compare it to existing corpora. Sentences in CSB are from various sources including online articles and news. The vision of this project is to build a very large and complete Chinese semantic corpus in the future. Currently, it only focuses on the predicate-argument structures in a sentence without annotation of the temporal relations and coreference. CBS is different with respect to commonly used dataset CPB in the following aspects: • In terms of predicate, CSB takes wider range of predicates into account. We not only annotated common verbs, but also nominal verbs, as NomBank does, and state words. Whereas 2070 CPB only annotate common verbs as predicates. • In terms of semantic roles, CSB has a more fine-grained semantic role set. There are 31 roles defined in five types (as Table. 1 shows). Whereas in CPB, there are totally 23 roles, including core roles and non-core roles. • CSB does not have any pre-defined frames for predicates because all roles are set to be non-predicate-specific. The reason for not defining frames is that frames may lead inconsistencies in labels. For example, according to Chinese verb formation theory (Sun et al., 2009), in CPB, an agent of a verb is often marked as its Arg0, but not all Arg0 are agents. Therefore, roles are defined for predicates with similar syntactic and semantic regularities, rather than single predicate. Two direct benefits of using stand-alone nonpredicate-specific roles are: First, meanings of all semantic roles can be directly inferred from their labels. For instance, roles of things that people are telling (谈) or looking (看) are labeled as 内 容/content, because verbs like 谈and 看are often followed by an object. Second, we can easily annotate sentences with new predicates without defining new frame files. Other Corpora for Chinese SRL Other popular semantic role labeling corpora include Chinese NomBank (Xue, 2006), Peking University Chinese NetBank (Yulin, 2007). NomBank, often used as a complement to PropBank, annotates nominal predicates and semantic roles according to the similar semantic schema as PropBank does. Peking University Chinese NetBank was created by adding a semantic layer to Peking University Chinese TreeBank (Zhou et al., 1997). It only uses non-predicate-specific roles as we do. And its role set is smaller, which has 20 roles. 3 Challenges in Inheriting Knowledge from Heterogeneous Corpora Although there are a lot of annotated corpora for Chinese SRL as we mentioned in the previous section, most of them are quite small as compared to that in English. Data sparsity remains a bottleneck. This situation calls for larger training dataset, or effective approaches which can take advantage of very heterogeneous datasets. In this paper, we focus on the second problem, that is, to improve Chinese SRL by using heterogeneous corpora together within one model. We will consider the combination of the standard benchmark, CPB 1.0 dataset (Xue and Palmer, 2003), with the new corpus, CSB, because there are a lot of differences between them, as we discussed in Section 2. Consequently, a number of challenges arise for this task. Now we describe them as below. Inheriting from Different Schema and Role Sets. 
CPB was annotated with PropBank-style frames and roles, whereas Chinese FrameNet uses its own frames and roles. And our dataset has no frame files and use different role set. Therefore, it is hard to find explicit mapping or hierarchical relationships among their role sets, or decide which system is better, especially when there are more than two resources. Inheriting from Different Domain/Genre. The datasets mentioned above are composed of sentences from various sources, including news and stories, etc. However, it is well known that adding data in very different genre to training data may hurt parser performance (Bikel, 2004). Therefore, we also need to deal with domain adaptation problem when using heterogeneous data. In other words, the proposed approach should be robust to harmful features learned on incompatible datasets. It can also accommodate potentially different model structures and inputs in the procedure of knowledge fusion. Inheriting from Different Syntactic Annotation. Unlikes English, previous works (Ding and Chang, 2009; Sun et al., 2009) on Chinese SRL task often use both correct segmentation and part-of-speech tagging, and even treebank goldstandard parses (Xue, 2008) as their features. But some corpora like CPB and NetBank do not share the same PoS tag set, or do not have correct PoS tagging and gold treebank parses at all, like CSB. And in real application scenarios, it is more convenient to use automatic PoS tagging instead of goldstandard tagging on large datasets, as they can be obtained quickly. So to deal with the absence of syntactic features, we adopt automatic PoS tagging when training on CSB in this work. Some previous techniques, such as finetuning after pretraining (Wang et al., 2015; Li et al., 2016) and multi-task learning (Guo et al., 2016), have 2071 h(1) 1 h(1) 2 h(2) 1 h(2) 2 input output1 output2 σ σ (a) h(1) 1 h(1) 2 h(2) 1 h(2) 2 input output1 output2 GRA GRA Heter. 2 Target h(1) 1 h(1) 2 output1 GRA GRA Heter. 1 (b) Figure 2: Depiction of the standard Progressive Neural Network architecture (a) and ours PNN GRA model (b). Our model uses Gated Recurrent Adapters (GRA), instead of sigmoid adapters to access previous knowledge in previous columns learned on heterogeneous data. If there are more than one heterogeneous resources available, more columns can be added on the left. been used to deal with these challenges. Though they can also leverage knowledge from different domains, they have following drawbacks: finetuning cannot avoid catastrophic forgetting because learned parameters, whether embeddings or other hidden weights, will be tuned after the model has been initialized; And multi-task learning cannot ignore previously learned harmful features because some features are learned in shared layers, although it avoids forgetting by randomly selecting a task to learn at each iteration. Therefore, to solve the above-mentioned challenges, we further introduce progressive learning which we believe is more suitable for the task. 4 Progressive Learning Approach We propose a progressive learning approach which is ideal for combining heterogeneous SRL data for multiple reasons. First, it can accommodate dissimilar inputs with different schema, syntactic information and domain, because it allow models for heterogeneous resources to be extremely different, such as different network structures, different width, and different learning rates, etc. 
Second, it is immune to forgetting by freezing learned weights and can leverage prior knowledge via lateral connections. Third, the lateral connections can be extended with recurrent structure and gate mechanism to handle with forgetting problem over long distance. Our model is mainly inspired by Rusu et al. (2016). They proposed progressive neural networks for a wide variety of reinforcement learning tasks (e.g. Atari games and robot simulation). In their cases, inputs are pixels, outputs are learned policies. And each column, consisting of simple layers and convolutional layers, is trained to solve a particular Markov Decision Process. But in our case, inputs are sentences annotated using different syntactic tagsets and outputs are semantic role sequences. So we change the structure of columns to recurrent neural networks with LSTM, similar to the model proposed by Wang et al. (2015). Below we first introduce basic progressive neural network architecture, then describe our model, PNN with gated recurrent adapters. 4.1 Progressive Neural Networks Fig. 2a is an illustration of the basic progressive neural network model. It starts with single column (a neural network), in which there are L hidden layers and the output for ith layer (i ≤L) with ni units is h1 i ∈Rni. Θ1 denotes the parameters to be learned in the first column. When switching to a second corpus, it "freezes" the parameter Θ1 and randomly initialize a new column with parameters Θ2 and several lateral connections between two columns so that layer h2 i can receive input from both h2 i−1 and h1 i−1. In this straightforward manner, progressive neural networks can make use of columns with any structures or to compile lateral connections in an ensemble setting. To be more general, we calculate the output of ith layer in kth column hk i by: hk i = f(W k i hk i−1 + X j<k U (k:j) i hj i−1) (1) where W k i ∈Rnk i ×nk i−1 is the weight matrix of layer i of column k, U (k:j) i ∈Rnk i ×nj i−1 are the lateral connections to transfer information from layer i −1 of column j to layer i of column k, h0 is the input of the network. f can be any activation function, such as element-wise non-linearity. Bias term was omitted in the equation. Adapters. With implicit assumption that there is some "overlap" between the first task and the second task, pretrain-and-finetune learning paradigm is effective, as only slight adjustment to parameters is needed to learn new features. Progressive networks also have ability to transfer knowledge from previous tasks to improve convergence 2072 … … !"#$%&'() *+#,-%.%,. /0#1,(%2%3 h(2) 3 Nonlinear Layer Word Representation Bidirectional LSTM RNN h(2) 4 h(2) 5 h(1) 3 h(1) 4 h(1) 5 Nonlinear Layer Linear Layer GRA GRA GRA Sentence !"#$%&'(#)* P(path|x) … … Column 2 Column 1 Output h(1) 1 h(1) 2 h(2) 1 h(2) 2 GRA c o f i Figure 3: Each column is a stacked bidirectional LSTM RNN model. Two columns are connected by GRAs. There are three gates in each GRA: gi, gf, and go. The input gate gi and the forget gate gf can also be coupled as one uniform gate, that is gi = 1 −gf. speed. On the one hand, the model reuse previously learned features from left columns via lateral connections (i.e., adapters). On the other hand, new features can be learned by adding more columns incrementally. Moreover, when the "overlap" between two tasks is small, lateral connections can filter out harmful features by sigmoid functions. 
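To make Eq. (1) concrete, the sketch below computes the output of layer i in column k by combining the within-column input with the lateral inputs from earlier, frozen columns. It is an illustrative NumPy rendering of the equation under our own naming, not the authors' implementation.

```python
import numpy as np

def column_layer_output(h_prev_k, h_prev_frozen, W_k, U_laterals, f=np.tanh):
    """Eq. (1): layer i of column k with lateral connections.

    h_prev_k      -- layer i-1 activation of the current column k
    h_prev_frozen -- layer i-1 activations of the frozen columns j < k
    W_k           -- within-column weight matrix W_i^k
    U_laterals    -- lateral matrices U_i^(k:j), one per frozen column
    Only W_k and U_laterals receive gradients; earlier columns stay fixed.
    """
    pre_act = W_k @ h_prev_k
    for U, h_j in zip(U_laterals, h_prev_frozen):
        pre_act = pre_act + U @ h_j      # contribution transferred from column j
    return f(pre_act)                    # bias omitted, as in the paper's Eq. (1)
```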
So in practice, the output of adapters can also be calculated by a(k:j) i = σ(A(k:j) i αj i−1hj i−1) (2) where A(k:j) i is a matrix to be learned. We treat Equation 2 as one of baseline settings in experiments. 4.2 PNN with Gated Recurrent Adapter for Chinese SRL We reconstruct PNN with bidirectional LSTM to solve SRL problems. Our model is illustrated in Fig. 3. First, each column in the PNN architecture is a stacked bidirectional LSTM RNN, rather than convolutional neural networks, because inputs are sentences not pixels, and bi-LSTM RNN has proved powerful for Chinese SRL (Wang et al., 2015). Second, we enhance the adapter with recurrent structure and gate mechanism, because the simple Multi-Layer Perceptron (MLP) adapters have a limitation: their weights are learned word after word independently. For tasks like transferring reinforcement learning policies, this is enough because there are little dependencies among actions. But in NLP domain, things are different. Therefore, we add internal memory to adapters to help them remember what has been inherited from heterogeneous resource. Third, to keep gradient stable and balance between long-term and short-term memory, we introduce gate mechanism which has been widely used in RNN models. Intuitively, we call the new adapter Gated Recurrent Adapter (GRA). Formally, let h(<k) i−1 = [h1 i−1, ..., hj i−1, ..., hk−1 i−1 ] be the outputs of i −1 layers from the first column to the (k −1)th column. The dimensionality of them is n(<k) i−1 = [n1 i−1, ..., nk−1 i−1 ]. a(<k) is the outputs of k −1 adapters with dimension m(<k) = [m1, ..., mk−1]. The output vector is multiplied by a learned matrix Wa initialized by random small values before going to GRAs. Its role is to adjust for the different scales of the different inputs and reduce the dimensionality. Formally, the candidate outputs is ât = f(W j ahj t + U j aaj t−1) (3) where at−1 is the output of the adapter at the previous time-step. Ua is a weight matrix to learn. The output of an adapter aj t of layer i at time t can be formalized as follows, gi =σ(W j i hj t + U j i aj t−1) (4) gf =σ(W j f hj t + U j faj t−1) (5) go =σ(W j o hj t + U j oaj t−1) (6) ãt =gi ⊙ât + gf ⊙ãj t−1 (7) at =go ⊙f(ãt−1) (8) where hj ∈Rmj i−1×nj i−1 is the outputs of previous layers, Wf, Wo, Wa ∈Rmi−1×ni−1, Uf, Uo, Ua ∈ Rmi−1×di−1 are parameters to learn. di−1 is the dimension of the inner memory in adapters. ãt represents the inner state of the adapter. f is an activation function, like tanh. The input gate and the forget gate can be coupled as a uniform gate, that is gi = 1 −gf to alleviate the problem of information redundancy and reduce the possibility of overfitting (Greff et al., 2015). Finally, we calculate the output of the next layer i of column k by hk i = f(W k i concat[a(<k), hk i−1]) (9) 2073 where Wi ∈Rn(k) i ×P m(<k) i−1 is the parameters in ith layer. 4.3 Training Criteria We adopt the sentence tagging approach as Wang et al. (2015) did, because words in a sentence may closely be related with each other, independently labeling each word is inappropriate. Sentence tagging approach only consider valid transition paths of tags when calculating the cost. For example, when using IOBES tagging schema, tag transition from I-Arg0 to B-Arg0 is invalid, and transition from I-Arg0 to I-Arg1 is also invalid because the type of the role changed inside the semantic chunk. 
For each task (column), the log likelihood of sentence x and its correct path y is log p(y|x, Θ) = log exp PN t ot,yt P z exp PNi t ot,zt (10) where N is the number of words, ot ∈RM is the output of the last layer at time t. yt = k means the tth word has the kth semantic role label. z ranges from all the valid paths of tags. The negative log likelihood of the whole training set D is J(Θ) = X (x,y)∈D log p(y|x, Θ) (11) We minimize J(Θ) using stochastic gradient descent to learn network parameters Θ. When testing, the best prediction of a sentence can be found using Viterbi algorithm. 5 Experiments 5.1 Experiment Settings To compare our approach with others, we designed four experimental setups: (1) A simple LSTM setup on CSB and CPB with automatic PoS tagging. Since CPB is about two times as large as the new corpus, we need to know whether CSB can be used for training good semantic parsers and how much information can be learned from CSB by machine. So we conduct this experiment to provide two baselines for CSB and CPB respectively. In this setup we train and evaluate a one-column LSTM model on CSB. (2) A simple LSTM setup on CPB with pretrained word embedding on CSB (marked as biLSTM+CSB embedding). Previous work found that using pretrained word embeddings can improve performance (Wang et al., 2015) on Chinese SRL. So we conduct this experiment to compare with the method using embeddings trained on large-scale unlabeled data like Gigaword 2, and NetBank. (3) A two-column finetuning setup where we pretrain the first column on CSB and finetune both two columns on CPB. Clearly, finetuning is a traditional method for continual learning scenarios. But the disadvantage of it is that learned features will be gradually forgotten when the model is adapting new tasks. To assess this empirically, we design this experiment. The model uses the same network structure as PNN does, but it does not "freeze" parameters in the first column when tuning two columns. (4) A progressive network setup where we train column 1 on CSB, then train column 2 and adapters on CPB. We conduct this experiment to evaluate the proposed model and compare it to all previous methods. To further analyze effectiveness of the new adapter structure, we also conduct an experiment for progressive nets with GRA. We apply grid-search technique to explore hyper-parameters including learning rates and width of layers. Preprocessing. We follow the same data setting as previous work (Xue, 2008; Sun et al., 2009), which divided CPB dataset3 into three parts: 648 files, from chtb_081.fid to chtb_899.fid, are the training set; 40 files, from chtb_041.fid to chtb_080.fid, are the development set; 72 files, from chtb_001.fid to chtb_040.fid, and chtb_900.fid to chtb_931.fid, are used as the test set. We also divide shuffled CSB corpus into three sets with similar partition ratios. Currently, there are 10634 sentences in CSB. So 8900 samples are used as training set, 500 samples as development set and the rest 965 samples as test set. We use Stanford Parser4 for PoS tagging. 5.2 Results Performance on Chinese SemBank Table 2 gives the results of Experiment 1. We see that precision on CPB with automatic PoS tagging is 2https://code.google.com/p/word2vec/ 3https://catalog.ldc.upenn.edu/LDC2005T23 4http://nlp.stanford.edu/software/lex-parser.shtml 2074 Corpus Pr.(%) Rec.(%) F1(%) 1. CSB 75.80 73.45 74.61 2. 
CPB 76.75 73.03 74.84 Table 2: Results of Chinese SRL tested on CPB and CSB with automatic PoS tagging, using standard LSTM RNN model (Experiment 1). 0.689 0.729 0.769 0.809 [0, 20) [20, 40) [40, 60) [60, 80) [80, 100) F1 sentence length PNN PNN with GRA Figure 4: Performance of PNN models with and without GRAs over sentence length. For sentences shorter than 40 words, there is no big difference. But for longer sentences (≥40 words), PNN with GRA model performs significantly better. about 0.9 percentage point higher than that on CSB, while recall is about 0.4 percentage point lower, and the gap between F1 scores on CPB and CSB is not significant, which is only about 0.3 percentage point, although the size of CSB is smaller. We can explain this by two reasons. First, CSB does not have predicate-specific roles which may lead to inconsistency, as we explained in Section 3. Thus, it might be easier to learn by machine. Second, there are underlying similarities between them: both of them annotate predicateargument structures. So when there is sufficient training data, difference between scores on testing sets is not very likely to be huge. Overall, the results indicated that the new annotated corpus CSB is not a bad choice for training semantic parser even when this does not involve larger training sets. Compare to Methods without Using Heterogeneous Data Table 3 summarizes the SRL performance of previous benchmark methods and our experiments described above. Collobert and Weston only conducted their experiments on English corpus, but we notice that their approach has been implemented and tested on CPB by Wang et al. (2015), so we also put their result here for comparison. We can make several observations from these results. Our approach significantly outperforms Sha et al. (2016) by a large margin (Wilcoxon Signed Rank Test, p < 0.05), even without using GRA. This result can prove the ability of our model to capture underlying similarities between heterogeneous SRL resources. Compare to Methods Using Heterogeneous Resources The results of methods using external language resources are also presented in Table 3. Not surprisingly, we see that the overall best F1 score, 79.67%, is achieved by the progressive nets with the GRAs. Furthermore, as shown in Fig. 4, PNN with GRA performs better on longer sentences, which is consistent with our expectation. Without GRA, the F1 drops 0.37% percentage point to 79.30, confirming that gated recurrent adapter structure is more suitable for our task because it can remember what has been transferred in previous time steps. Compared to progressive learning methods, finetuning method does not perform well even with the same network structure (Two-column finetuning), but it is still better than simply pretraining word embeddings (bi-LSTM+CSB embedding). This confirms the effectiveness of multicolumn learning structure which add capacity to the model by adding new columns. Therefore, as can be seen, our PNN model achieves 79.30% F1 score, outperforming finetuning by 0.88% percentage point, and pretraining embeddings by even larger margin. To sum up, not only network structures but also learning methods (finetuning/multitask/progressive) can influence the performance of knowledge transfer. According to the results, our PNN approach is more effective than others because it is immune to forgetting and robust to harmful features, and GRA is more suitable for our task than simple adapters. 
6 Related Work 6.1 Chinese Semantic Role Labeling The concept of Semantic Role Labeling is first proposed by Gildea and Jurafsky(2002). Previous work on Chinese SRL mainly focused on how to improve SRL on single corpus. Approaches falls into two categories: feature-based machine learning approaches and neural-network-based approaches. Using feature-based method, Sun and Jurafsky (2004) did the preliminary work and achieved promising results without using any large 2075 Method F1(%) Xue (2008) ME 71.90 Collobert and Weston (2008) MTL 74.05 Ding and Chang (2009) CRF 72.64 Yang et al. (2014) Multi-Predicate 75.31 Wang et al. (2015) bi-LSTM 77.09 (+0.00) Sha et al. (2016) bi-LSTM+QOM 77.69 With external language resources Wang et al. (2015) +Gigaword embedding 77.21 Wang et al. (2015) +NetBank embedding 77.59 Guo et al. (2016) +Relataion Classification 75.46 With CSB corpus bi-LSTM+CSB embedding 77.68 (+0.59) Two-column finetuning 78.42 (+1.33) Two-column progressive(ours) 79.30 (+2.21) Two-column Progressive+GRA(ours) 79.67 (+2.58) Table 3: Result comparison on CPB dataset. Compared to learning with single corpus using bi-LSTM model (77.09%), learning with CSB can improve the performance by at list 0.59%. Also the best score (79.67%) was achieved by the PNN GRA model. annotated corpus. After CPB was built by Xue and Palmer (2003), more complete and systematic research on Chinese SRL were done (Xue and Palmer, 2005; Chen et al., 2006; Ding and Chang, 2009; Yang et al., 2014). Neural network methods do not rely on handcrafted features. For Chinese SRL, Wang et al. (2015) proposed bidirectional a LSTM RNN model. And based on their work, Sha (2016) proposed quadratic optimization method as a postprocessing module and further improved the result. 6.2 Learning with Heterogeneous Data In this paper, we mainly focus on learning with heterogeneous semantic resource for Chinese SRL. Wang et al. (2015) introduced heterogeneous data by using pretrained embeddings at initialization and achieved promising results. Guo et al. (2016) proposed a multitask learning method with a unified neural network model to learn SRL and relation classification task together and also achieved improvement. Different from previous work, we proposed a progressive neural network model with gated recurrent adapters to leverage knowledge from heterogeneous semantic data. Compared with previous methods, this approach is more constructive, rather than destructive, because it uses lateral connections to access previously learned features which are fixed when learning new tasks. And by introducing gated recurrent adapters, we further enhance our model to deal with long sentences and achieve state-of-the-art performance on Chinese PropBank. 7 Conclusion and Future Work In this paper, we proposed a progressive neural network model with gated recurrent adapters to leverage heterogeneous corpus for Chinese SRL. Unlike previous methods like finetuning, ours leverage prior knowledge via lateral connections. Experiments have shown that our model yields better performance on CPB than all baseline models. Moreover, we proposed novel gated recurrent adapter to handle transfer on long sentences, The experiment has proved the effectiveness of the new adapter structure. We believe that progressive learning with heterogeneous data is a promising avenue to pursue. So in the future, we might try to combine more heterogeneous semantic data for other tasks like event extraction and relation classification, etc. 
We also release the new corpus Chinese SemBank for Chinese SRL. We hope that it will be helpful in providing common benchmarks for future work on Chinese SRL tasks. 2076 Acknowledgments This paper is supported by NSFC project 61375074, National Key Basic Research Program of China 2014CB340504 and Beijing Advanced Innovation Center for Imaging Technology BAICIT-2016016. The contact authors of this paper are Baobao Chang and Zhifang Sui. References Wilker Aziz, Miguel Rios, and Lucia Specia. 2016. Shallow semantic trees for smt. In Proc. of the 6th Workshop on Statistical Machine Translation. Edinburgh, Scotland, pages 316–322. Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, and Roberto Basili. 2013. Textual inference and meaning representation in human robot interaction. In In Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora. pages 65–69. Daniel M Bikel. 2004. On the parameter space of generative lexicalized statistical parsing models. Ph.D. thesis, Citeseer. Wenliang Chen, Yujie Zhang, and Hitoshi Isahara. 2006. An empirical study of chinese chunking. In Proceedings of the COLING/ACL on Main conference poster sessions. Association for Computational Linguistics, pages 97–104. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning. ACM, pages 160–167. Weiwei Ding and Baobao Chang. 2009. Word based chinese semantic role labeling with semantic chunking. International Journal of Computer Processing Of Languages 22(02n03):133–154. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational linguistics 28(3):245–288. Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. 2015. Lstm: A search space odyssey. arXiv preprint arXiv:1503.04069 . Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu, and Jun Xu. 2016. A unified architecture for semantic role labeling and relation classification. In Proc. of the 26th International Conference on Computational Linguistics (COLING). Tianshi Li, Qi Li, and BaoBao Chang. 2016. Improving chinese semantic role labeling with english proposition bank. In China National Conference on Chinese Computational Linguistics. Springer, pages 3–11. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. CoRR abs/1606.04671. Lei Sha, Tingsong Jiang, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Capturing argument relationships for chinese semantic role labeling . Honglin Sun and Daniel Jurafsky. 2004. Shallow semantic parsing of chinese. In Proceedings of NAACL 2004. pages 249–256. Weiwei Sun, Zhifang Sui, Meng Wang, and Xin Wang. 2009. Chinese semantic role labeling with shallow parsing. In Proceedings of the 2009 EMNLP. Association for Computational Linguistics, pages 1475– 1483. Zhen Wang, Tingsong Jiang, Baobao Chang, and Zhifang Sui. 2015. Chinese semantic role labeling with bidirectional recurrent neural networks. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1626–1631. Deyi Xiong, Min Zhang, and Haizhou Li. 2012. Modeling the translation of predicate-argument structure for smt. In In Proc. of the 50th Annual Meeting of the Association for Computational Linguistics. pages 902–911. Nianwen Xue. 2006. 
Annotating the predicateargument structure of chinese nominalizations. In Proceedings of the fifth international conference on Language Resources and Evaluation. pages 1382– 1387. Nianwen Xue. 2008. Labeling chinese predicates with semantic roles. Computational linguistics 34(2):225–255. Nianwen Xue and Martha Palmer. 2003. Annotating the propositions in the penn chinese treebank. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17. Association for Computational Linguistics, pages 47–54. Nianwen Xue and Martha Palmer. 2005. Automatic semantic role labeling for chinese verbs. In In Proceedings of the 19th International Joint Conference on Artificial Intelligence. pages 1160–1165. Haitong Yang, Chengqing Zong, et al. 2014. Multipredicate semantic role labeling. In EMNLP. pages 363–373. Yuan Yulin. 2007. The fineness hierarchy of semantic roles and its application in nlp. Journal of Chinese Information Processing 21(4):10–20. Qiang Zhou, Wei Zhang, and Shiwen Yu. 1997. Building a chinese treebank. Journal of Chinese Information Processing 4. 2077
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 199–208 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1019 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 199–208 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1019 Generating Natural Answers by Incorporating Copying and Retrieving Mechanisms in Sequence-to-Sequence Learning Shizhu He1, Cao Liu1,2, Kang Liu1 and Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 University of Chinese Academy of Sciences, Beijing, 100049, China {shizhu.he, cao.liu, kliu, jzhao}@nlpr.ia.ac.cn Abstract Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions. 1 Introduction Question answering (QA) systems devote to providing exact answers, often in the form of phrases and entities for natural language questions (Woods, 1977; Ferrucci et al., 2010; Lopez et al., 2011; Yih et al., 2015), which mainly focus on analyzing questions, retrieving related facts from text snippets or knowledge bases (KBs), and finally predicting the answering semantic units-SU (words, phrases and entities) through ranking (Yao and Van Durme, 2014) and reasoning (Kwok et al., 2001). However, in real-world environments, most people prefer the correct answer replied with a more natural way. For example, most existing <ÀîÁ¬½Ü£¬³öÉúµØµã£¬±±¾©> <ÀîÁ¬½Ü£¬¹ú¼®£¬ÐÂ¼ÓÆÂ> <ÀîÁ¬½Ü£¬³öÉúÄêÔ£¬1963Äê4ÔÂ26ÈÕ> ... ÀîÁ¬½ÜÊÇÄÄÀïÈË£¿ ÀîÁ¬½Ü³öÉúÓÚ ±±¾© £¬Ëû <ÀîÁ¬½Ü£¬ÐÔ±ð£¬ÄÐ> ÏÖÔÚÊÇ ÐÂ¼ÓÆÂ ¹ú¼®¡£ Copy Reasoning From Question From KB Question Response Jet Li where was Jet Li was born in Beijing. He is now a Singaporean citizen. Copying and Retrieving Predicting Copying from Question Retrieving from KB Question Natural Answer Do you know from ? Knowledge Base Figure 1: Incorporating copying and retrieving mechanisms in generating a natural answer. commercial products such as Siri1 will reply a natural answer “Jet Li is 1.64m in height.” for the question “How tall is Jet Li?”, rather than only answering one entity “1.64m”. Basic on this observation, we define the “natural answer” as the natural response in our daily communication for replying factual questions, which is usually expressed in a complete/partial natural language sentence rather than a single entity/phrase. In this case, the system needs to not only parse question, retrieve relevant facts from KB but also generate a proper reply. To this end, most previous approaches employed message-response patterns. 
Figure 1 schematically illustrates the major steps and features in this process. The system first needs to recognize the topic entity “Jet Li” in the question and then extract multiple related facts <Jet Li, gender, Male>, <Jet Li, birthplace, Beijing> and <Jet Li, nationality, Singapore> from KB. Based on the chosen facts and the commonly used messageresponse patterns “where was %entity from?” “%entity was born in %birthplace, %pronoun is %nationality citizen.”2, the system could finally generate the natural answer (McTear et al., 2016). In order to generate natural answers, typical 1http://www.apple.com/ios/siri/ 2In this pattern, %entity indicates the placeholder of the topic entity, %property indicates the property value of the topic entity. 199 products need lots of Natural Language Processing (NLP) tools and pattern engineering (McTear et al., 2016), which not only suffers from high costs of manual annotations for training data and patterns, but also have low coverage that cannot flexibly deal with variable linguistic phenomena in different domains. Therefore, this paper devotes to develop an end-to-end paradigm that generates natural answers without any NLP tools (e.g. POS tagging, parsing, etc.) and pattern engineering. This paradigm tries to consider question answering in an end-to-end framework. In this way, the complicated QA process, including analyzing question, retrieving relevant facts from KB, and generating correct, coherent, natural answers, could be resolved jointly. Nevertheless, generating natural answers in an end-to-end manner is not an easy task. The key challenge is that the words in a natural answer may be generated by different ways, including: 1) the common words usually are predicted using a (conditional) language model (e.g. “born” in Figure 1); 2) the major entities/phrases are selected from the source question (e.g. “Jet Li”); 3) the answering entities/phrases are retrieved from the corresponding KB (e.g. “Beijing”). In addition, some words or phrases even need to be inferred from related knowledge (e.g. “He” should be inferred from the value of “gender”). And we even need to deal with some morphological variants (e.g. “Singapore” in KB but “Singaporean” in answer). Although existing end-to-end models for KB-based question answering, such as GenQA (Yin et al., 2016), were able to retrieve facts from KBs with neural models. Unfortunately, they cannot copy SUs from the question in generating answers. Moreover, they could not deal with complex questions which need to utilize multiple facts. In addition, existing approaches for conversational (Dialogue) systems are able to generate natural utterances (Serban et al., 2016; Li et al., 2016) in sequence-tosequence learning (Seq2Seq). But they cannot interact with KB and answer information-inquired questions. For example, CopyNet (Gu et al., 2016) is able to copy words from the original source in generating the target through incorporating copying mechanism in conventional Seq2Seq learning, but they cannot retrieve SUs from external memory (e.g. KBs, Texts, etc.). Therefore, facing the above challenges, this paper proposes a neural generative model called COREQA with Seq2Seq learning, which is able to reply an answer in a natural way for a given question. Specifically, we incorporate COpying and REtrieving mechanisms within Seq2Seq learning. COREQA is able to analyze the question, retrieve relevant facts and generate a sequence of SUs using a hybrid method with a completely end-to-end learning framework. 
We conduct experiments on both synthetic data sets and real-world datasets, and the experimental results demonstrate the efficiency of COREQA compared with existing endto-end QA/Dialogue methods. In brief, our main contributions are as follows: • We propose a new and practical question answering task which devotes to generating natural answers for information inquired questions. It can be regarded as a fusion task of QA and Dialogue. • We propose a neural network based model, named as COREQA, by incorporating copying and retrieving mechanism in Seq2Seq learning. In our knowledge, it is the first end-to-end model that could answer complex questions in a natural way. • We implement experiments on both synthetic and real-world datasets. The experimental results demonstrate that the proposed model could be more effective for generating correct, coherent and natural answers for knowledge inquired questions compared with existing approaches. 2 Background: Neural Models for Sequence-to-Sequence Learning 2.1 RNN Encoder-Decoder Recurrent Neural Network (RNN) based EncoderDecoder is the backbone of Seq2Seq learning (Cho et al., 2014). In the Encoder-Decoder framework, an encoding RNN first transform a source sequential object X = [x1, ..., xLX] into an encoded representation c. For example, we can utilize the basic model: ht = f(xt, ht−1); c = φ(h1, ..., hLX), where {ht} are the RNN hidden states, c is the context vector which could be assumed as an abstract representation of X. In practice, gated RNN variants such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014) are commonly used for learning longterm dependencies. And the another encoding 200 Do you know where was Jet_Li from ? 𝒉1 𝒉2 𝒉3 𝒉4 𝒉5 𝒉6 𝒉7 𝒉8 Subject Property Object Jet_Li gender Male Jet_Li birthplace Beijing Jet_Li nationality Singapore Jet_Li birthdate 26 April 1963 … … … Attentive Read from Question Attentive Read from KB Copying from Question Retrieving from KB 𝒇1 𝒇2 𝒇3 𝒇4 𝒇… 𝒒 𝑠1 𝑠2 𝑠3 𝑠4 𝑠5 <eos> Jet_Li was born in Jet_Li was born in Beijing 𝑠5 … … … Softmax 𝑃Beijing = 𝑃𝑝𝑟(Beijing) + 𝑃𝑐𝑜(Beijing) + 𝑃re(Beijing) (a) Knowledge (facts) Retrieval (c) Decoder: Natural Answer Generation (d) Predicting, Copying (from Question) and Retrieving (from KB) DNN DNN KB Position Question Position Vocabulary DNN Question context KB context Question Copying History KB Retrieving History (e) State Update “in” embedding Copying “in” from Question Retrieving “in” from KB (b) Encoder: Question and KB Representation Figure 2: The overall diagram of COREQA. tricks is Bi-directional RNN, which connect two hidden states of positive time direction and negative time direction. Once the source sequence is encoded, another decoding RNN model is to generate a target sequence Y = [y1, ..., yLY ], through the following prediction model: st = f(yt−1, st−1, c); p(yt|y<t, X) = g(yt−1, st, c), where st is the RNN hidden state at time t, the predicted target word yt at time t is typically performed by a softmax classifier over a settled vocabulary (e.g. 30,000 words) through function g. 2.2 The Attention Mechanism The prediction model of classical decoders for each target word yi share the same context vector c. 
However, a fixed vector is not enough to obtain a better result on generating a long targets.The attention mechanism in the decoding can dynamically choose context ct at each time step (Bahdanau et al., 2014), for example, representing ct as the weighted sum of the source states {ht}, ct = XLX i=1 αtihi; αti = eρ(st−1,hi) P i′ eρ(st−1,h′ i) (1) where the function ρ use to compute the attentive strength with each source state, which usually adopts a neural network such as multi-layer perceptron (MLP). 2.3 The Copying Mechanism Seq2Seq learning heavily rely on the “meaning” for each word in source and target sequences, however, some words in sequences are “no-meaning” symbols and it is improper to encode them in encoding and decoding processes. For example, generating the response “Of course, read” for replying the message “Can you read the word ‘read’?” should not consider the meaning of the second “read”. By incorporating the copying mechanism, the decoder could directly copy the sub-sequences of source into the target (Vinyals et al., 2015). The basic approach is to jointly predict the indexes of the target word in the fixed vocabulary and/or matched positions in the source sequences (Gu et al., 2016; Gulcehre et al., 2016). 3 COREQA To generate natural answers for information inquired questions, we should first recognize key topics in the question, then extract related facts from KB, and finally fusion those instance-level knowledge with some global-level “smooth” and “glue” words to generate a coherent reply. In this section, we present COREQA, a differentiable Seq2Seq model to generate natural answers, which is able to analyze the question, retrieve relevant facts and predict SUs in an end-to-end fashion, and the predicted SUs may be predicted from the vo201 cabulary, copied from the given question, and/or retrieved from the corresponding KB. 3.1 Model Overview As illustrated in Figure 2, COREQA is an encoderdecoder framework plugged with a KB engineer. A knowledge retrieval module is firstly employed to retrieve related facts from KB by question analysis (see Section 3.2). And then the input question and the retrieved facts are transformed into the corresponding representations by Encoders (see Section 3.3). Finally, the encoded representations are feed to Decoder for generating the target natural answer (see Section 3.4). 3.2 Knowledge (facts) Retrieval We mainly focus on answering the information inquired questions (factual questions, and each question usually contains one or more topic entities). This paper utilizes the gold topic entities for simplifying our design. Given the topic entities, we retrieve the related facts from the corresponding KB. KB consists of many relational data, which usually are sets of inter-linked subject-propertyobject (SPO) triple statements. Usually, question contains the information used to match the subject and property parts in a fact triple, and answer incorporates the object part information. 3.3 Encoder The encoder transforms all discrete input symbols (including words, entities, properties and properties’ values) and their structures into numerical representations which are able to feed into neural models (Weston et al., 2014). 3.3.1 Question Encoding Following (Gu et al., 2016), a bi-directional RNN (Schuster and Paliwal, 1997) is used to transform the question sequence into a sequence of concatenated hidden states with two independent RNNs. The forward and backward RNN respectively obtain {−→h 1, ..., −→h LX} and {←−h LX, ..., ←−h 1}. 
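A minimal numpy sketch of this bi-directional encoding is given below, with a plain tanh RNN standing in for the gated cell used in practice; dimensions and weights are illustrative placeholders. The concatenated states and the question vector q computed at the end correspond to the quantities defined next.

```python
# Bi-directional question encoding (sketch): a plain tanh RNN replaces the
# gated RNN used in the paper; all weights and sizes are illustrative.
import numpy as np

def rnn_pass(X, W_in, W_rec, b):
    """Run a simple RNN over the rows of X (one row per word embedding)."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for x_t in X:
        h = np.tanh(W_in @ x_t + W_rec @ h + b)
        states.append(h)
    return states

rng = np.random.default_rng(0)
d_word, d_hid, length = 8, 6, 5
X = rng.normal(size=(length, d_word))         # word embeddings x_1 .. x_L

new_params = lambda: (rng.normal(scale=0.1, size=(d_hid, d_word)),
                      rng.normal(scale=0.1, size=(d_hid, d_hid)),
                      np.zeros(d_hid))
fwd = rnn_pass(X, *new_params())              # forward hidden states
bwd = rnn_pass(X[::-1], *new_params())[::-1]  # backward states, re-aligned by position

# Short-term memory of the question: position-wise concatenation of both directions.
M_Q = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
# Whole-question vector q: last forward state and first backward state.
q = np.concatenate([fwd[-1], bwd[0]])
print(len(M_Q), M_Q[0].shape, q.shape)
```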
The concatenated representation is considered to be the short-term memory of question (MQ = {ht}, ht = [−→h t, ←−h LX−t+1]). q = [−→h LX, ←−h 1] is used to represent the entire question, which could be used to compute the similarity between the question and the retrieved facts. 3.3.2 Knowledge Base Encoding We use s, p and o denote the subject, property and object (value) of one fact f, and es, ep and eo to denote its corresponding embeddings. The fact representation f is then defined as the concatenation of es, ep and eo. The list of all related facts’ representations, {f} = {f1, ..., fLF } (refer to MKB, LF denotes the maximum of candidate facts), is considered to be a short-term memory of KB while answering questions about the topic entities. In addition, given the distributed representation of question and candidate facts, we define the matching scores function between question and facts as S(q, fj) = DNN1(q, fj) = tanh(W2 · tanh(W1·[q, fj]+b1)+b2), , where DNN1 is the matching function defined by a two-layer perceptron, [·, ·] denotes vector concatenation, and W1, W2, b1 and b2 are the learning parameters. In fact, we will make a slight change of the matching function because it will also depend on the state of decoding process at different times. The modified function is S(q, st, fj) = DNN1(q, st, fj) where st is the hidden state of decoder at time t. 3.4 Decoder The decoder uses an RNN to generate a natural answer based on the short-term memory of question and retrieved facts which represented as MQ and MKB, respectively. The decoding process of COREQA have the following differences compared with the conventional decoder: Answer words prediction: COREQA predicts SUs based on a mixed probabilistic model of three modes, namely the predict-mode, the copy-mode and the retrieve-mode, where the first mode predicts words with the vocabulary, and the two latter modes pick SUs from the questions and matched facts, respectively; State update: the predicted word at step t −1 is used to update st, but COREQA uses not only its word embedding but also its corresponding positional attention informations in MQ and MKB ; Reading short-Memory MQ and MKB: MQ and MKB are fed into COREQA with two ways, the first one is the “meaning” with embeddings and the second one is the positions of different words (properties’ values). 3.4.1 Answer Words Prediction The generated words (entities) may come from vocabulary, source question and matched KB. Accordingly, our model use three correlative output 202 layer: shortlist prediction layer, question location copying layer and candidate-facts location retrieving layer, respectively. And we use the softmax classifier of the above three cascaded output layers to pick SUs. We assume a vocabulary V = {v1, ..., vN} ∪{UNK}, where UNK indicates any out-of-vocabulary (OOV) words. Therefore, we have adopted another two set of SUs XQ and XKB which cover words/entities in the source question and the partial KB. That is, we have adopted the instance-specific vocabulary V ∪XQ ∪XKB for each question. It’s important to note that these three vocabularies V, XQ and XKB may overlap. 
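As a concrete illustration of the fact-matching score S(q, s_t, f_j) from Section 3.3.2, the numpy sketch below scores a handful of candidate facts with a two-layer perceptron; all weights and dimensions are random placeholders rather than trained parameters.

```python
# Fact-matching score (sketch): S(q, s_t, f_j) = tanh(W2·tanh(W1·[q, s_t, f_j] + b1) + b2).
# Weights and dimensions are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_q, d_s, d_f, d_hidden = 12, 6, 9, 16
W1 = rng.normal(scale=0.1, size=(d_hidden, d_q + d_s + d_f))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.1, size=(1, d_hidden))
b2 = np.zeros(1)

def match_score(q, s_t, f_j):
    """Two-layer perceptron over the concatenated question, state and fact vectors."""
    x = np.concatenate([q, s_t, f_j])
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2).item()

q = rng.normal(size=d_q)              # question representation
s_t = rng.normal(size=d_s)            # decoder state at time t
facts = rng.normal(size=(4, d_f))     # candidate fact embeddings [e_s; e_p; e_o]
print([round(match_score(q, s_t, f), 3) for f in facts])
```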
At each time step t in the decoding process, given the RNN state st together with MQ and MKB, the probabilistic function for generating any target SU yt is a “mixture” model as follow p(yt|st, yt−1, MQ, MKB) = ppr(yt|st, yt−1, ct) · pm(pr|st, yt−1)+ pco(yt|st, yt−1, MQ) · pm(co|st, yt−1)+ pre(yt|st, yt−1, MKB) · pm(re|st, yt−1) (2) where pr, co and re stand for the predict-mode, the copy-mode and the retrieve-mode, respectively, pm(·|·) indicates the probability model for choosing different modes (we use a softmax classifier with two-layer MLP). The probability of the three modes are given by ppr(yt|·) = 1 Z eψpr(yt) pco(yt|·) = 1 Z X j:Qj=yt eψco(yt) pre(yt|·) = 1 Z X j:KBj=yt eψre(yt) (3) where ψpr(·), ψco(·) and ψre(·) are score functions for choosing SUs in predict-mode (from V), copy-mode (from XQ) and retrieve-mode (from XKB), respectively. And Z is the normalization term shared by the three modes, Z = eψpr(v) + P j:Qj=v eψco(v) + P j:KBj=v eψre(v). And the three modes could compete with each other through a softmax function in generating target SUs with the shared normalization term (as shown in Figure 2. Specifically, the scoring functions of each mode are defined as follows: Predict-mode: Some generated words need reasoning (e.g. “He” in Figure 1) and morphological transformation (e.g. “Singaporean” in Figure 1). Therefore, we modify the function as ψpr(yt = vi) = vT i Wpr[st, cqt, ckbt] , where vi ∈Rdo is the word vector at the output layer (not the input word embedding), Wpr ∈R(dh+di+df)×do (di, dh and df indicate the size of input word vector, RNN decoder hidden state and fact representation respectively), and cqt and ckbt are the temporary memory of reading MQ and MKB at time t (see Section 3.4.3). Copy-mode: The score for “copying” the word xj from question Q is calculated as ψco(yt = xj) = DNN2(hj, st, histQ) , where DNN2 is a neural network function with a two-layer MLP and histQ ∈RLX is an accumulated vector which record the attentive history for each word in question (similar with the coverage vector in (Tu et al., 2016)). Retrieve-mode: The score for “retrieving” the entity word vj from retrieval facts (“Object” part) is calculated as ψre(yt = vj) = DNN3(fj, st, histKB) , where DNN3 is also a neural network function and histKB ∈RLF is an accumulated vector which record the attentive history for each fact in candidate facts. 3.4.2 State Update In the generic decoding process, each RNN hidden state st is updated with the previous state st−1, the word embedding of previous predicted symbol yt−1, and an optional context vector ct (with attention mechanism). However, yt−1 may not come from vocabulary V and not owns a word vector. Therefore, we modify the state update process in COREQA. More specifically, yt−1 will be represented as concatenated vector of [e(yt−1), rqt−1, rkbt−1], where e(yt−1) is the word embedding associated with yt−1, rqt−1 and rkbt−1 are the weighted sum of hidden states in MQ and MKB corresponding to yt−1 respectively. rqt = XLX j=1 ρtjhj, rkbt = XLF j=1 δtjfj ρtj =    1 K1 pco(xj|·), xj = yt 0 otherwise δtj =    1 K2 pre(fj|·), object(fj) = yt 0 otherwise (4) where object(f) indicate the “object” part of fact f (see Figure 2), and K1 and K2 are the normalization terms which equal P j′:x′ j=yt pco(x′ j|·) and P j′:object(f′ j)=yt pre(f′ j|·), respectively, and it 203 could consider the multiple positions matching yt in source question and KB. 3.4.3 Reading short-Memory MQ and MKB COREQA employ the attention mechanism at decoding process. 
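To make the three-way competition of Eqs. (2)–(3) concrete, the sketch below normalizes predict-, copy- and retrieve-mode scores with a single shared term Z and sums the contributions of every way a token can be produced; the ψ scores are random placeholders, and the mode probabilities p_m of Eq. (2) are omitted for brevity.

```python
# Shared-normalization mixture over predict/copy/retrieve modes (sketch).
# The psi_* scores are random placeholders for the learned score functions.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<eos>", "was", "born", "in", "is"]
question_words = ["where", "was", "Jet_Li", "from", "?"]
fact_objects = ["Male", "Beijing", "Singapore"]   # "object" parts of candidate facts

psi_pr = rng.normal(size=len(vocab))              # predict-mode scores
psi_co = rng.normal(size=len(question_words))     # copy-mode scores
psi_re = rng.normal(size=len(fact_objects))       # retrieve-mode scores

Z = np.exp(psi_pr).sum() + np.exp(psi_co).sum() + np.exp(psi_re).sum()

def prob(token):
    """p(y_t = token): add up every source that can emit this token."""
    p = sum(np.exp(s) for w, s in zip(vocab, psi_pr) if w == token)
    p += sum(np.exp(s) for w, s in zip(question_words, psi_co) if w == token)
    p += sum(np.exp(s) for w, s in zip(fact_objects, psi_re) if w == token)
    return p / Z

for tok in ["was", "Jet_Li", "Beijing"]:   # vocab+copy, copy-only, retrieve-only
    print(tok, round(float(prob(tok)), 4))
```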
At each decoder time t, we selective read the context vector cqt and ckbt from the short-term memory of question MQ and retrieval facts MKB (alike to Formula 1). In addition, the accumulated attentive vectors histQ and histKB are able to record the positional information of SUs in the source question and retrieved facts. 3.5 Training Although some target SUs in answer are copied and retrieved from the source question and the external KB respectively, COREQA is fully differential and can be optimized in an end-to-end manner using back-propagation. Given the batches of the source questions {X}M and target answers {Y }M both expressed with natural language (symbolic sequences), the objective function is to minimize the negative log-likelihood: L = −1 N M X k=1 LY X t=1 log[p(y(k) t |y(k) <t , X(k)] (5) where the superscript (k) indicates the index of one question-answer (Q-A) pair. The network is no need for any additional labels for training models, because the three modes sharing the same softmax classifier for predicting target words, they can learn to coordinate with each other by maximizing the likelihood of observed Q-A pairs. 4 Experiments In this section, we present our main experimental results in two datasets. The first one is a small synthetic dataset in a restricted domain (only involving four properties of persons) (Section 4.1). The second one is a big dataset in open domain, where the Q-A pairs are extracted from community QA website and grounded against a KB with an Integer Linear Programming (ILP) method (Section 4.2). COREQA and all baseline models are trained on a NVIDIA TITAN X GPU using TensorFlow3 tools, where we used the Adam (Kingma and Ba, 2014) learning rule to update gradients in all experimental configures. The sources codes and data will be 3https://www.tensorflow.org/ released at the personal homepage of the first author4. 4.1 Natural QA in Restricted Domain Task: The QA systems need to answer questions involving 4 concrete properties of birthdate (including year, month and day) and gender). Through merely involving 4 properties, there are plenty of QA patterns which focus on different aspects of birthdate, for example, “What year were you born?” touches on “year”, but “When is your birthday?” touches on “month and day”. Dataset: Firstly, 108 different Q-A patterns have been constructed by two annotators, one in charge of raising question patterns and another one is responsible for generating corresponding suitable answer patterns, e.g. When is %e birthday? →She was born in %m %dth. where the variables %e, %y, %m, %d and %g (deciding she or he) indicates the person’s name, birth year, birth month, birth day and gender, respectively. Then we randomly generate a KB which contains 80,000 person entities, and each entity including four facts. Given KB facts, we can finally obtain specific Q-A pairs. And the sampling KB, patterns, and the generated QA pairs are shown in Table 1. In order to maintain the diversity, we randomly select 6 patterns for each person. Finally, we totally obtain 239,934 sequences pairs (half patterns may be unmatched because of “gender” property). Q-A Patterns Examples (e.g. KB facts (e2,year,1987);(e2,month,6); (e2,day,20);(e2,gender,male)) When is %e birthday? When is e2 birthday? He was born in %m %dth. He was born in June 20th. What year were %e born? What year were e2 born? %e is born in %y year. e2 is born in 1987 year. Table 1: Sample KB facts, patterns and their generated Q-A pairs. 
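A minimal sketch of this pattern-based generation is given below; the single entity and two patterns are small illustrative samples in the spirit of Table 1, not the released data.

```python
# Synthetic Q-A generation from patterns and KB facts (sketch).
# The entity "e2", its facts and the two patterns mirror the examples in Table 1.
kb = {"e2": {"year": "1987", "month": "June", "day": "20", "gender": "male"}}

patterns = [
    ("When is %e birthday?", "%g was born in %m %dth."),
    ("What year were %e born?", "%e is born in %y year."),
]

def instantiate(pattern, entity, facts):
    pronoun = "He" if facts["gender"] == "male" else "She"
    slots = {"%e": entity, "%y": facts["year"], "%m": facts["month"],
             "%d": facts["day"], "%g": pronoun}
    out = pattern
    for slot, value in slots.items():
        out = out.replace(slot, value)
    return out

for q_pat, a_pat in patterns:
    question = instantiate(q_pat, "e2", kb["e2"])
    answer = instantiate(a_pat, "e2", kb["e2"])
    print(question, "->", answer)
```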
Experimental Setting: The total 239,934 Q-A pairs are split into training (90%) and testing set (10%). The baseline includes 1) generic RNN Encoder-Decoder (marked as RNN), 2) Seq2Seq with attention (marked as RNN+atten), 3) CopyNet, and 4) GenQA. For a fair comparison, we use bi-directional LSTM for encoder and another LSTM for decoder for all Seq2Seq models, with hidden layer size = 600 and word embedding dimen4http://www.nlpr.ia.ac.cn/cip/shizhuhe/publications.html 204 sion = 200. We set LF as 5. Metrics: We adopt (automatic evaluation (AE) to test the effects of different models. AE considers the precisions of the entire predicted answers and four specific properties, and the answer is complete correct only when all predicted properties’ values is right. To measure the performance of the proposed method, we select following metrics, including Pg5, Py, Pm and Pd which denote the precisions for ‘gender’, ‘year’, ‘month’ and ‘day’ properties, respectively. And PA, RA and F1A indicate the precision, recall and F1 in the complete way. Experimental Results: The AE experimental results are shown in Table 2. It is very clear from Table 2 that COREQA significantly outperforms all other compared methods. The reason of the GenQA’s poor performance is that all synthetic questions need multiple facts, and GenQA will “safely” choose the most frequent property (“gender”) for all questions. We also found the performances on “year” and “day” have a little worse than other properties such as “gender”, it may because there have more ways to answer questions about “year” and “day”. Models Pg Py Pm Pd PA RA F1A RNN 72.2 0 1.1 0.2 0 27.5 0 RNN+atten 55.8 1.1 11.3 9.5 1.7 34 3.2 CopyNet 75.2 8.7 28.3 5.8 3.7 32.5 6.7 GenQA 73.4 0 0 0 0 27.1 0 COREQA 100 84.8 93.4 81 87.4 94 90.6 Table 2: The AE results (%) on synthetic test data. Discussion: Because of the feature of directly “hard” copy and retrieve SUs from question and KB, COREQA could answer questions about unseen entities.To evaluate the effects of answering questions about unseen entities, we re-construct 2,000 new person entities and their corresponding facts about four known properties, and obtain 6,081 Q-A pairs through matching the sampling patterns mentioned above. The experimental results are shown in Table 3, it can be seen that the performance did not fall too much. Entities Pg Py Pm Pd PA RA F1A Seen 100 84.8 93.4 81 87.4 94 90.6 Unseen 75.1 84.5 93.5 81.2 63.8 85.1 73.1 Table 3: The AE (%) for seen and unseen entities. 5The “gender” is right when the entity name (e.g. ‘e2’) or the personal pronoun (e.g. ‘She’) in answer is correct. 4.2 Natural QA in Open Domain Task: To test the performance of the proposed approach in open domains, we modify the task of GenQA (Yin et al., 2016) for supporting multifacts (a typical example is shown in Figure 1). That is, a natural QA system should generate a sequence of SUs as the natural answer for a given natural language question through interacting with a KB. Dataset: GenQA have released a corpus6, which contains a crawling KB and a set of ground QA pairs. However, the original Q-A pairs only matched with just one single fact. In fact, we found that a lot of questions need more than one fact (about 20% based on sampling inspection). Therefore, we crawl more Q-A pairs from Chinese community QA website (Baidu Zhidao7). Combined with the originally published corpus, we create a lager and better-quality data for natural question answering. 
Specifically, an Integral Linear Programming (ILP) based method is employed to automatically construct “grounding” Q-A pairs with the facts in KB (inspired by the work of adopting ILP to parse questions (Yahya et al., 2012)). In ILP, the main constraints and considered factors are listed below: 1) the “subject” entity and “object” entity of a triple have to match with question words/phrases (marked as subject mention) and answer words/phrases (marked as object mention) respectively; 2) any two subject mentions or object mentions should not overlap; 3) a mention can match at most one entity; 4) the edit distance between the Q-A pair and the matched candidate fact (use a space to joint three parts) is smaller, they are more relevant. Finally, we totally obtain 619,199 instances (an instance contains a question, an answer, and multiple facts), and the number of instances that can match one and multiple facts in KB are 499,809 and 119,390, respectively. Through the evaluation of 200 sampling instances, we estimate that approximate 81% matched facts are helpful for the generating answers. However, strictly speaking, only 44% instances are truly correct grounding. In fact, grounding the Q-A pairs from community QA website is a very challenge problem, we will leave it in the future work. Experimental Setting: The dataset is split into training (90%) and testing set (10%). The sen6https://github.com/jxfeb/Generative QA 7https://zhidao.baidu.com/ 205 tences in Chinese are segmented into word sequences with Jieba8 tool. And we use the words with the frequency larger than 3, which covering 98.4% of the word in the corpus. For a fair comparison, we use bi-directional LSTM for the encoder and another LSTM for decoder for all Seq2Seq models, with hidden layer size = 1024 and word embedding dimension = 300. We select CopyNet (more advanced Seq2Seq model) and GenQA for comparison. We set LF as 10. Metrics: Besides adopting the AE as a metric (same as GenQA (Yin et al., 2016)), we additionally use manual evaluation (ME) as another metric. ME considers three aspects about the quality of the generated answer (refer to (Asghar et al., 2016)): 1) correctness; 2) syntactical fluency; 3) coherence with the question. We employ two annotators to rate such three aspects of CopyNet, GenQA and COREQA. Specifically, we sample 100 questions, and conduct C2 3 = 3 pair-wise comparisons for each question and count the winning times of each model (comparisons may both win or both lose). Experimental Results: The AE and ME results are shown in Table 4 and Table 5, respectively. Meanwhile, we separately present the results according to the number of the facts which a question needs in KB, including just one single fact (marked as Single), multiple facts (marked as Multi) and all (marked as Mixed). In fact, we train two separate models for Single and Multi questions for the unbalanced data . From Table 4 and Table 5, we can clearly observe that COREQA significantly outperforms all other baseline models. And COREQA could generate a better natural answer in three aspects: correctness, fluency and coherence. CopyNet cannot interact with KB which is important to generate correct answers. For example, for “Who is the director of The Little Chinese Seamstress?”, if without the fact (The Little Chinese Seamstress, director, Dai Siji), QA systems cannot generate a correct answer. Models Single Multi Mixed CopyNet 9.7 0.8 8.7 GenQA 47.2 28.9 45.1 COREQA 58.4 42.7 56.6 Table 4: The AE accuracies (%) on real world test data. 
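Constraint 4) above uses the edit distance between a Q-A pair and a candidate fact as a relevance signal. The sketch below illustrates only that signal with a plain Levenshtein implementation and English stand-in strings; the full ILP over the mention-matching constraints is not shown here.

```python
# Edit-distance relevance signal from constraint 4) of the ILP grounding (sketch).
# Strings are illustrative English stand-ins for the Chinese Q-A data.
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]

qa_pair = "where was Jet_Li from ? Jet_Li was born in Beijing"
candidate_facts = [
    ("Jet_Li", "birthplace", "Beijing"),
    ("Jet_Li", "gender", "Male"),
    ("Jet_Li", "birthdate", "26 April 1963"),
]

# Smaller distance => the candidate fact is treated as more relevant.
ranked = sorted((edit_distance(qa_pair, " ".join(f)), f) for f in candidate_facts)
for dist, fact in ranked:
    print(dist, fact)
```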
8https://github.com/fxsjy/jieba Models Correctness Fluency Coherence CopyNet 0 13.3 3.3 GenQA 26.7 33.3 20 COREQA 46.7 50 60 Table 5: The ME results (%) on sampled mixed test data. Case Study and Error Analysis: Table 6 gives some examples of generated by COREQA and the gold answers to the questions in test set. It is very clearly seen that the parts of generating SUs are predicted from the vocabulary, and other SUs are copied from the given question (marked as bold) and retrieved from the KB (marked as underline). And we analyze sampled examples and believe that there are several major causes of errors: 1) did not match the right facts (ID 6); 2) the generated answers contain some repetition of meaningless words (ID 7); 3) the generated answers are not coherence natural language sentences (ID 8). 5 Related Work Seq2Seq learning is to maximize the likelihood of predicting the target sequence Y conditioned on the observed source sequence X (Sutskever et al., 2014), which has been applied successfully to a large number of NLP tasks such as Machine Translation (Wu et al., 2016) and Dialogue (Vinyals and Le, 2015). Our work is partially inspired by the recent work of QA and Dialogue which have adopted Seq2Seq learning. CopyNet (Gu et al., 2016) and Pointer Networks (Vinyals et al., 2015; Gulcehre et al., 2016) which could incorporate copying mechanism in conventional Seq2Seq learning. Different from our application which deals with knowledge inquired questions and generates natural answers, CopyNet (Gu et al., 2016) and Pointer Networks (Gulcehre et al., 2016) can only copy words from the original input sequence. In contrast, COREQA is able to retrieve SUs from external memory. And GenQA (Yin et al., 2016) can only deal with the simple questions which could be answered by one fact, and it also did not incorporate the copying mechanism in Seq2Seq learning. Moreover, our work is also inspired by Neural Abstract Machine (Graves et al., 2016; Yin et al., 2015; Liang et al., 2016) which could retrieve facts from KBs with neural models. Unlike natural answer, Neural Abstract Machine (Mou et al., 2016) is concentrating on obtaining concrete answer en206 ID Question Gold Answer Generated Natural Answer 1 ~;M‚pÅd´=I<º =I<œ\<ŒÑ) \<Œ<§=I< Which country did Hargreaves of Bayern comes from? British! born in Canada Canadians, British 2 âyx´=‡xº Çxy3´éÜIB“Ö• éÜIB“Ö•§´Çx Which ethnic groups is Sha Zukang from? Han, now he is the DSG of the UN DSG of the UN, Han 3 ÛËA#Óù´Xº ´˜‡Š[!ÑW[ Û Û ÛË Ë ËA A A# # #Ó Ó Óù ù ù‡Š[ óÆ[ Who is Robert Schumann? a writer, musician Robert Schuhmann is a writer and philosopher. 4 êdƒ.ì´Xº ¦´˜¶v¥$Ä ˜‡`Dv¥ $Ä Who is Mascherano? He is a football player An excellent football player 5 nSކà¿ü´Xº •g#   ü ü ü´•g# Who is the director of The Little Chinese Seamstress? Dai Sijie Director Dai Sijie 6 >Kcn´Xûº MŽ!*À!Új¸ ¾f Who shot the movie The Iron Triangle? Tsui Hark, Johnny To, Ringo Lam Feng Xiaogang 7 X•R6Iù‡<˜ •[]º å˜< ´´´ Who knows some details of Xi Murong? poetess yes, yes, yes 8 ,´=‡úimuº AT´þ°•Œ þ°•Œ Which company developed the game Crazy Arcade? should be the Shanda Group playing Shanda Group Table 6: Examples of the generated natural answers by COREQA. tities with neural network based reasoning. 6 Conclusion and Future Work In this paper, we propose an end-to-end system to generate natural answers through incorporating copying and retrieving mechanisms in sequenceto-sequence learning. 
Specifically, the sequences of SUs in the generated answer may be predicted from the vocabulary, copied from the given question and retrieved from the corresponding KB. And the future work includes: a) lots of questions cannot be answered directly by facts in a KB (e.g. “Who is Jet Li’s father-in-law?”), we plan to learn QA system with latent knowledge (e.g. KB embedding (Bordes et al., 2013)); b) we plan to adopt memory networks (Sukhbaatar et al., 2015) to encode the temporary KB for each question. Acknowledgments The authors are grateful to anonymous reviewers for their constructive comments. The work was supported by the Natural Science Foundation of China (No.61533018) and the National High Technology Development 863 Program of China (No.2015AA015405). References Nabiha Asghar, Pascal Poupart, Jiang Xin, and Hang Li. 2016. Online sequence-to-sequence reinforcement learning for open-domain conversational agents. arXiv preprint arXiv:1612.03929 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems. pages 2787–2795. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine 31(3):59–79. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538(7626):471–476. Jiatao Gu, Zhengdong Lu, Hang Li, and O.K. Victor Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1631–1640. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 140–149. 207 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Cody Kwok, Oren Etzioni, and Daniel S Weld. 2001. Scaling question answering to the web. ACM Transactions on Information Systems (TOIS) 19(3):242– 262. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics, pages 1192–1202. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020 . Vanessa Lopez, Victoria Uren, Marta Sabou, and Enrico Motta. 2011. Is question answering fit for the semantic web?: a survey. Semantic Web 2(2):125– 155. Michael McTear, Zoraida Callejas, and David Griol. 2016. The Conversational Interface: Talking to Smart Devices. Springer Publishing Company, Incorporated, 1st edition. Lili Mou, Zhengdong Lu, Hang Li, and Zhi Jin. 2016. Coupling distributed and symbolic execution for natural language queries. arXiv preprint arXiv:1612.02741 . Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Coverage-based neural machine translation. arXiv preprint arXiv:1601.04811 . Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 . Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. CoRR abs/1410.3916. William A Woods. 1977. Lunar rocks in natural english: Explorations in natural language question answering. In Linguistic structures processing. pages 521–569. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Mohamed Yahya, Klaus Berberich, Shady Elbassuoni, Maya Ramanath, Volker Tresp, and Gerhard Weikum. 2012. Natural language questions for the web of data. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 379–390. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 956–966. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1321–1331. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. 
Neural generative question answering. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965 . 208
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2078–2088 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1190 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2078–2088 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1190 Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings John Wieting Kevin Gimpel Toyota Technological Institute at Chicago, Chicago, IL, 60637, USA {jwieting,kgimpel}@ttic.edu Abstract We consider the problem of learning general-purpose, paraphrastic sentence embeddings, revisiting the setting of Wieting et al. (2016b). While they found LSTM recurrent networks to underperform word averaging, we present several developments that together produce the opposite conclusion. These include training on sentence pairs rather than phrase pairs, averaging states to represent sequences, and regularizing aggressively. These improve LSTMs in both transfer learning and supervised settings. We also introduce a new recurrent architecture, the GATED RECURRENT AVERAGING NETWORK, that is inspired by averaging and LSTMs while outperforming them both. We analyze our learned models, finding evidence of preferences for particular parts of speech and dependency relations. 1 1 Introduction Modeling sentential compositionality is a fundamental aspect of natural language semantics. Researchers have proposed a broad range of compositional functional architectures (Mitchell and Lapata, 2008; Socher et al., 2011; Kalchbrenner et al., 2014) and evaluated them on a large variety of applications. Our goal is to learn a generalpurpose sentence embedding function that can be used unmodified for measuring semantic textual similarity (STS) (Agirre et al., 2012) and can also serve as a useful initialization for downstream tasks. We wish to learn this embedding function 1Trained models and code are available at http:// ttic.uchicago.edu/˜wieting. such that sentences with high semantic similarity have high cosine similarity in the embedding space. In particular, we focus on the setting of Wieting et al. (2016b), in which models are trained on noisy paraphrase pairs and evaluated on both STS and supervised semantic tasks. Surprisingly, Wieting et al. found that simple embedding functions—those based on averaging word vectors—outperform more powerful architectures based on long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997). In this paper, we revisit their experimental setting and present several techniques that together improve the performance of the LSTM to be superior to word averaging. We first change data sources: rather than train on noisy phrase pairs from the Paraphrase Database (PPDB; Ganitkevitch et al., 2013), we use noisy sentence pairs obtained automatically by aligning Simple English to standard English Wikipedia (Coster and Kauchak, 2011). Even though this data was intended for use by text simplification systems, we find it to be efficient and effective for learning sentence embeddings, outperforming much larger sets of examples from PPDB. We then show how we can modify and regularize the LSTM to further improve its performance. The main modification is to simply average the hidden states instead of using the final one. 
For regularization, we experiment with two kinds of dropout and also with randomly scrambling the words in each input sequence. We find that these techniques help in the transfer learning setting and on two supervised semantic similarity datasets as well. Further gains are obtained on the supervised tasks by initializing with our models from the transfer setting. Inspired by the strong performance of both averaging and LSTMs, we introduce a novel recurrent neural network architecture which we call 2078 the GATED RECURRENT AVERAGING NETWORK (GRAN). The GRAN outperforms averaging and the LSTM in both the transfer and supervised learning settings, forming a promising new recurrent architecture for semantic modeling. 2 Related Work Modeling sentential compositionality has received a great deal of attention in recent years. A comprehensive survey is beyond the scope of this paper, but we mention popular functional families: neural bag-of-words models (Kalchbrenner et al., 2014), deep averaging networks (DANs) (Iyyer et al., 2015), recursive neural networks using syntactic parses (Socher et al., 2011, 2012, 2013; ˙Irsoy and Cardie, 2014), convolutional neural networks (Kalchbrenner et al., 2014; Kim, 2014; Hu et al., 2014), and recurrent neural networks using long short-term memory (Tai et al., 2015; Ling et al., 2015; Liu et al., 2015). Simple operations based on vector addition and multiplication typically serve as strong baselines (Mitchell and Lapata, 2008, 2010; Blacoe and Lapata, 2012). Most work cited above uses a supervised learning framework, so the composition function is learned discriminatively for a particular task. In this paper, we are primarily interested in creating general purpose, domain independent embeddings for word sequences. Several others have pursued this goal (Socher et al., 2011; Le and Mikolov, 2014; Pham et al., 2015; Kiros et al., 2015; Hill et al., 2016; Arora et al., 2017; Pagliardini et al., 2017), though usually with the intent to extract useful features for supervised sentence tasks rather than to capture semantic similarity. An exception is the work of Wieting et al. (2016b). We closely follow their experimental setup and directly address some outstanding questions in their experimental results. Here we briefly summarize their main findings and their attempts at explaining them. They made the surprising discovery that word averaging outperforms LSTMs by a wide margin in the transfer learning setting. They proposed several hypotheses for why this occurs. They first considered that the LSTM was unable to adapt to the differences in sequence length between phrases in training and sentences in test. This was ruled out by showing that neither model showed any strong correlation between sequence length and performance on the test data. They next examined whether the LSTM was overfitting on the training data, but then showed that both models achieve similar values of the training objective and similar performance on indomain held-out test sets. Lastly, they considered whether their hyperparameters were inadequately tuned, but extensive hyperparameter tuning did not change the story. Therefore, the reason for the performance gap, and how to correct it, was left as an open problem. This paper takes steps toward addressing that problem. 3 Models and Training 3.1 Models Our goal is to embed a word sequence s into a fixed-length vector. We focus on three compositional models in this paper, all of which use words as the smallest unit of compositionality. 
We denote the tth word in s as st, and we denote its word embedding by xt. Our first two models have been well-studied in prior work, so we describe them briefly. The first, which we call AVG, simply averages the embeddings xt of all words in s. The only parameters learned in this model are those in the word embeddings themselves, which are stored in the word embedding matrix Ww. This model was found by Wieting et al. (2016b) to perform very strongly for semantic similarity tasks. Our second model uses a long short-term memory (LSTM) recurrent neural network (Hochreiter and Schmidhuber, 1997) to embed s. We use the LSTM variant from Gers et al. (2003) including its “peephole” connections. We consider two ways to obtain a sentence embedding from the LSTM. The first uses the final hidden vector, which we denote h−1. The second, denoted LSTMAVG, averages all hidden vectors of the LSTM. In both variants, the learnable parameters include both the LSTM parameters Wc and the word embeddings Ww. Inspired by the success of the two models above, we propose a third model, which we call the GATED RECURRENT AVERAGING NETWORK (GRAN). The GATED RECURRENT AVERAGING NETWORK combines the benefits of AVG and LSTMs. In fact it reduces to AVG if the output of the gate is all ones. We first use an LSTM to generate a hidden vector, ht, for each word st in s. Then we use ht to compute a gate that will be elementwise-multiplied with xt, resulting in a new, gated hidden vector at for each step t: at = xt ⊙σ(Wxxt + Whht + b) (1) 2079 where Wx and Wh are parameter matrices, b is a parameter vector, and σ is the elementwise logistic sigmoid function. After all at have been generated for a sentence, they are averaged to produce the embedding for that sentence. This model includes as learnable parameters those of the LSTM, the word embeddings, and the additional parameters in Eq. (1). For both the LSTM and GRAN models, we use Wc to denote the “compositional” parameters, i.e., all parameters other than the word embeddings. The motivation for the GRAN is that we are contextualizing the word embeddings prior to averaging. The gate can be seen as an attention, attending to the prior context of the sentence.2 We also experiment with four other variations of this model, though they generally were more complex and showed inferior performance. In the first, GRAN-2, the gate is applied to ht (rather than xt) to produce at, and then these at are averaged as before. GRAN-3 and GRAN-4 use two gates: one applied to xt and one applied to at−1. We tried two different ways of computing these gates: for each gate i, σ(Wxixt +Whiht +bi) (GRAN-3) or σ(Wxixt + Whiht + Waiat−1 + bi) (GRAN-4). The sum of these two terms comprised at. In this model, the last average hidden state, a−1, was used as the sentence embedding after dividing it by the length of the sequence. In these models, we are additionally keeping a running average of the embeddings that is being modified by the context at every time step. In GRAN-4, this running average is also considered when producing the contextualized word embedding. Lastly, we experimented with a fifth GRAN, GRAN-5, in which we use two gates, calculated by σ(Wxixt + Whiht + bi) for each gate i. The first is applied to xt and the second is applied to ht. The output of these gates is then summed. Therefore GRAN-5 can be reduced to either wordaveraging or averaging LSTM states, depending on the behavior of the gates. 
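To make the composition in Eq. (1) concrete, the following is a minimal NumPy sketch of the base GRAN: each word embedding xt is gated by σ(Wx xt + Wh ht + b), and the gated vectors are averaged into the sentence embedding. The LSTM that produces the hidden states ht is omitted and its outputs are assumed to be given; the parameter names follow the paper's notation, but the toy shapes and random values below are placeholders, not the trained model.

```python
# A minimal NumPy sketch of the GRAN composition in Eq. (1). The LSTM is not
# implemented here; `hidden_states` stands in for its per-token outputs.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gran_embed(word_embs, hidden_states, Wx, Wh, b):
    """word_embs: (T, d_word); hidden_states: (T, d_hid);
    Wx: (d_word, d_word); Wh: (d_word, d_hid); b: (d_word,)."""
    gates = sigmoid(word_embs @ Wx.T + hidden_states @ Wh.T + b)  # one gate per step
    gated = word_embs * gates        # elementwise product, as in Eq. (1)
    return gated.mean(axis=0)        # average the gated vectors over time steps

# Toy usage with random placeholder parameters (illustration only).
rng = np.random.RandomState(0)
T, d_word, d_hid = 5, 8, 6
x = rng.randn(T, d_word)
h = rng.randn(T, d_hid)              # stand-in for the LSTM hidden states
Wx, Wh, b = rng.randn(d_word, d_word), rng.randn(d_word, d_hid), np.zeros(d_word)
sentence_vec = gran_embed(x, h, Wx, Wh, b)
# If the gate saturates at all ones, this reduces to plain word averaging (AVG).
```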
If the first gate is all ones and the second all zeros throughout the sequence, the model is equivalent to word averaging. Conversely, if the first gate is all zeros and the second is all ones throughout the sequence, the model is equivalent to averaging the LSTM states.2 Further analysis of these models is included in Section 4. 2We tried a variant of this model without the gate. We obtain at from f(Wx xt + Wh ht + b), where f is a nonlinearity, tuned over tanh and ReLU. The performance of the model is significantly worse than the GRAN in all experiments.
3.2 Training We follow the training procedure of Wieting et al. (2015) and Wieting et al. (2016b), described below. The training data consists of a set S of phrase or sentence pairs ⟨s1, s2⟩ from either the Paraphrase Database (PPDB; Ganitkevitch et al., 2013) or the aligned Wikipedia sentences (Coster and Kauchak, 2011) where s1 and s2 are assumed to be paraphrases. We optimize a margin-based loss:

$$\min_{W_c, W_w} \frac{1}{|S|} \sum_{\langle s_1, s_2 \rangle \in S} \Big[ \max\big(0, \delta - \cos(g(s_1), g(s_2)) + \cos(g(s_1), g(t_1))\big) + \max\big(0, \delta - \cos(g(s_1), g(s_2)) + \cos(g(s_2), g(t_2))\big) \Big] + \lambda_c \|W_c\|^2 + \lambda_w \|W_{w_{\text{initial}}} - W_w\|^2 \qquad (2)$$

where g is the model in use (e.g., AVG or LSTM), δ is the margin, λc and λw are regularization parameters, Wwinitial is the initial word embedding matrix, and t1 and t2 are carefully-selected negative examples taken from a mini-batch during optimization. The intuition is that we want the two phrases to be more similar to each other (cos(g(s1), g(s2))) than either is to their respective negative examples t1 and t2, by a margin of at least δ.
3.2.1 Selecting Negative Examples To select t1 and t2 in Eq. (2), we simply choose the most similar phrase in some set of phrases (other than those in the given phrase pair). For simplicity we use the mini-batch for this set, but it could be a different set. That is, we choose t1 for a given ⟨s1, s2⟩ as follows:

$$t_1 = \operatorname*{argmax}_{t : \langle t, \cdot \rangle \in S_b \setminus \{\langle s_1, s_2 \rangle\}} \cos(g(s_1), g(t))$$

where Sb ⊆ S is the current mini-batch. That is, we want to choose a negative example ti that is similar to si according to the current model. The downside is that we may occasionally choose a phrase ti that is actually a true paraphrase of si.
4 Experiments Our experiments are designed to address the empirical question posed by Wieting et al. (2016b): why do LSTMs underperform AVG for transfer learning? In Sections 4.1.2-4.2, we make progress on this question by presenting methods that bridge the gap between the two models in the transfer setting. We then apply these same techniques to improve performance in the supervised setting, described in Section 4.3. In both settings we also evaluate our novel GRAN architecture, finding it to consistently outperform both AVG and the LSTM.
4.1 Transfer Learning 4.1.1 Datasets and Tasks We train on large sets of noisy paraphrase pairs and evaluate on a diverse set of 22 textual similarity datasets, including all datasets from every SemEval semantic textual similarity (STS) task from 2012 to 2015. We also evaluate on the SemEval 2015 Twitter task (Xu et al., 2015) and the SemEval 2014 SICK Semantic Relatedness task (Marelli et al., 2014). Given two sentences, the aim of the STS tasks is to predict their similarity on a 0-5 scale, where 0 indicates the sentences are on different topics and 5 indicates that they are completely equivalent. We report the average Pearson's r over these 22 sentence similarity tasks.
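As a concrete illustration of the objective in Eq. (2) and the negative-example selection of Section 3.2.1, here is a rough NumPy sketch that computes the hinge loss for one mini-batch of precomputed sentence embeddings. It is a simplification rather than the authors' implementation: negatives are searched only over the opposite side of the batch, and the λc and λw regularization terms are omitted.

```python
# A rough sketch of the margin loss in Eq. (2) with in-batch negative sampling
# (Section 3.2.1). `emb1` and `emb2` hold g(s1) and g(s2) for one mini-batch,
# already computed by whichever model g is being trained.
import numpy as np

def cosine_matrix(A, B):
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A_n @ B_n.T                       # (batch, batch) cosine similarities

def margin_loss(emb1, emb2, delta=0.4):
    sims = cosine_matrix(emb1, emb2)
    pos = np.diag(sims)                      # cos(g(s1_i), g(s2_i))
    # Most similar *other* example in the batch serves as the negative (t1, t2);
    # the paper searches the whole mini-batch, here we only search the other side.
    masked = sims - np.eye(len(sims)) * 1e9  # exclude the true pair
    neg_for_s1 = masked.max(axis=1)          # best t1 for each s1
    neg_for_s2 = masked.max(axis=0)          # best t2 for each s2
    loss = (np.maximum(0.0, delta - pos + neg_for_s1)
            + np.maximum(0.0, delta - pos + neg_for_s2))
    return loss.mean()

# Toy usage with random stand-ins for the sentence embeddings.
rng = np.random.RandomState(0)
e1, e2 = rng.randn(32, 300), rng.randn(32, 300)
print(margin_loss(e1, e2, delta=0.4))
```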
Each STS task consists of 4-6 datasets covering a wide variety of domains, including newswire, tweets, glosses, machine translation outputs, web forums, news headlines, image and video captions, among others. Further details are provided in the official task descriptions (Agirre et al., 2012, 2013, 2014, 2015). 4.1.2 Experiments with Data Sources We first investigate how different sources of training data affect the results. We try two data sources. The first is phrase pairs from the Paraphrase Database (PPDB). PPDB comes in different sizes (S, M, L, XL, XXL, and XXXL), where each larger size subsumes all smaller ones. The pairs in PPDB are sorted by a confidence measure and so the smaller sets contain higher precision paraphrases. PPDB is derived automatically from naturally-occurring bilingual text, and versions of PPDB have been released for many languages without the need for any manual annotation (Ganitkevitch and Callison-Burch, 2014). The second source of data is a set of sentence pairs automatically extracted from Simple English Wikipedia and English Wikipedia articles by Coster and Kauchak (2011). This data was extracted for developing text simplification AVG LSTM LSTMAVG PPDB 67.7 54.2 64.2 SimpWiki 68.4 59.3 67.5 Table 1: Test results on SemEval semantic textual similarity datasets (Pearson’s r ×100) when training on different sources of data: phrase pairs from PPDB or simple-to-standard English Wikipedia sentence pairs from Coster and Kauchak (2011). systems, where each instance pairs a simple and complex sentence representing approximately the same information. Though the data was obtained for simplification, we use it as a source of training data for learning paraphrastic sentence embeddings. The dataset, which we call SimpWiki, consists of 167,689 sentence pairs. To ensure a fair comparison, we select a sample of pairs from PPDB XL such that the number of tokens is approximately the same as the number of tokens in the SimpWiki sentences.3 We use PARAGRAM-SL999 embeddings (Wieting et al., 2015) to initialize the word embedding matrix (Ww) for all models. For all experiments, we fix the mini-batch size to 100, and λc to 0. We tune the margin δ over {0.4, 0.6, 0.8} and λw over {10−4, 10−5, 10−6, 10−7, 10−8, 0}. We train AVG for 7 epochs, and the LSTM for 3, since it converges much faster and does not benefit from 7 epochs. For optimization we use Adam (Kingma and Ba, 2015) with a learning rate of 0.001. We use the 2016 STS tasks (Agirre et al., 2016) for model selection, where we average the Pearson’s r over its 5 datasets. We refer to this type of model selection as test. For evaluation, we report the average Pearson’s r over the 22 other sentence similarity tasks. The results are shown in Table 1. We first note that, when training on PPDB, we find the same result as Wieting et al. (2016b): AVG outperforms the LSTM by more than 13 points. However, when training both on sentence pairs, the gap shrinks to about 9 points. It appears that part of the inferior performance for the LSTM in prior work was due to training on phrase pairs rather than on sentence pairs. The AVG model also benefits from training on sentences, but not nearly as much as the LSTM.4 3The PPDB data consists of 1,341,188 phrase pairs and contains 3 more tokens than the SimpWiki data. 
4We experimented with adding EOS tags at the end of training and test sentences, SOS tags at the start of train2081 Our hypothesis explaining this result is that in PPDB, the phrase pairs are short fragments of text which are not necessarily constituents or phrases in any syntactic sense. Therefore, the sentences in the STS test sets are quite different from the fragments seen during training. We hypothesize that while word-averaging is relatively unaffected by this difference, the recurrent models are much more sensitive to overall characteristics of the word sequences, and the difference between train and test matters much more. These results also suggest that the SimpWiki data, even though it was developed for text simplification, may be useful for other researchers working on semantic textual similarity tasks. 4.1.3 Experiments with LSTM Variations We next compare LSTM and LSTMAVG. The latter consists of averaging the hidden vectors of the LSTM rather than using the final hidden vector as in prior work (Wieting et al., 2016b). We hypothesize that the LSTM may put more emphasis on the words at the end of the sentence than those at the beginning. By averaging the hidden states, the impact of all words in the sequence is better taken into account. Averaging also makes the LSTM more like AVG, which we know to perform strongly in this setting. The results on AVG and the LSTM models are shown in Table 1. When training on PPDB, moving from LSTM to LSTMAVG improves performance by 10 points, closing most of the gap with AVG. We also find that LSTMAVG improves by moving from PPDB to SimpWiki, though in both cases it still lags behind AVG. 4.2 Experiments with Regularization We next experiment with various forms of regularization. Previous work (Wieting et al., 2016b,a) only used L2 regularization. Wieting et al. (2016b) also regularized the word embeddings back to their initial values. Here we use L2 regularization ing and test sentences, adding both, and adding neither. We treated adding these tags as hyperparameters and tuned over these four settings along with the other hyperparameters in the original experiment. Interestingly, we found that adding these tags, especially EOS, had a large effect on the LSTM when training on SimpWiki, improving performance by 6 points. When training on PPDB, adding EOS tags only improved performance by 1.6 points. The addition of the tags had a smaller effect on LSTMAVG. Adding EOS tags improved performance by 0.3 points on SimpWiki and adding SOS tags on PPDB improved performance by 0.9 points. as well as several additional regularization methods we describe below. We try two forms of dropout. The first is just standard dropout (Srivastava et al., 2014) on the word embeddings. The second is “word dropout”, which drops out entire word embeddings with some probability (Iyyer et al., 2015). We also experiment with scrambling the inputs. For a given mini-batch, we go through each sentence pair and, with some probability, we shuffle the words in each sentence in the pair. When scrambling a sentence pair, we always shuffle both sentences in the pair. We do this before selecting negative examples for the mini-batch. The motivation for scrambling is to make it more difficult for the LSTM to memorize the sequences in the training data, forcing it to focus more on the identities of the words and less on word order. Hence it will be expected to behave more like the word averaging model.5 We also experiment with combining scrambling and dropout. 
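The two sequence-level regularizers described above are simple to state in code. The sketch below is an illustration only, not the authors' preprocessing: it drops whole tokens with some probability ("word dropout") and, with some probability, shuffles the words of both sentences in a pair before negative examples are selected (scrambling).

```python
# Illustrative word dropout and scrambling for paraphrase pairs (assumed
# helper names; applied before negative examples are chosen for a mini-batch).
import numpy as np

rng = np.random.RandomState(1)

def word_dropout(tokens, rate=0.2):
    """Drop entire tokens; kept tokens are fed to the encoder as usual."""
    keep = rng.rand(len(tokens)) >= rate
    kept = [t for t, k in zip(tokens, keep) if k]
    return kept if kept else tokens          # avoid returning an empty sentence

def maybe_scramble(pair, rate=0.5):
    """With probability `rate`, shuffle the words of *both* sentences in the pair."""
    s1, s2 = pair
    if rng.rand() < rate:
        s1, s2 = list(s1), list(s2)
        rng.shuffle(s1)
        rng.shuffle(s2)
    return s1, s2

batch = [(["the", "cat", "sat"], ["a", "cat", "was", "sitting"])]
batch = [maybe_scramble(p) for p in batch]
batch = [(word_dropout(s1), word_dropout(s2)) for s1, s2 in batch]
```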
In this setting, we tune over scrambling with either word dropout or dropout. The settings for these experiments are largely the same as those of the previous section with the exception that we tune λw over a smaller set of values: {10−5, 0}. When using L2 regularization, we tune λc over {10−3, 10−4, 10−5, 10−6}. When using dropout, we tune the dropout rate over {0.2, 0.4, 0.6}. When using scrambling, we tune the scrambling rate over {0.25, 0.5, 0.75}. We also include a bidirectional model (“Bi”) for both LSTMAVG and the GATED RECURRENT AVERAGING NETWORK. We tune over two ways to combine the forward and backward hidden states; the first simply adds them together and the second uses a single feedforward layer with a tanh activation. We try two approaches for model selection. The first, test , is the same as was done in Section 4.1.2, where we use the average Pearson’s r on the 5 2016 STS datasets. The second tunes based on the average Pearson’s r of all 22 datasets in our evaluation. We refer to this as oracle. The results are shown in Table 2. They show that dropping entire word embeddings and scram5We also tried some variations on scrambling that did not yield significant improvements: scrambling after obtaining the negative examples, partially scrambling by performing n swaps where n comes from a Poisson distribution with a tunable λ, and scrambling individual sentences with some probability instead of always scrambling both in the pair. 2082 Model Regularization Oracle 2016 STS AVG none 68.5 68.4 dropout 68.4 68.3 word dropout 68.3 68.3 LSTM none 60.6 59.3 L2 60.3 56.5 dropout 58.1 55.3 word dropout 66.2 65.3 scrambling 66.3 65.1 dropout, scrambling 68.4 68.4 LSTMAVG none 67.7 67.5 dropout, scrambling 69.2 68.6 BiLSTMAVG dropout, scrambling 69.4 68.7 Table 2: Results on SemEval textual similarity datasets (Pearson’s r × 100) when experimenting with different regularization techniques. Model Oracle STS 2016 GRAN (no reg.) 68.0 68.0 GRAN 69.5 68.9 GRAN-2 68.8 68.1 GRAN-3 69.0 67.2 GRAN-4 68.6 68.1 GRAN-5 66.1 64.8 BiGRAN 69.7 68.4 Table 3: Results on SemEval textual similarity datasets (Pearson’s r × 100) for the GRAN architectures. The first row, marked as (no reg.) is the GRAN without any regularization. The other rows show the result of the various GRAN models using dropout and scrambling. bling input sequences is very effective in improving the result of the LSTM, while neither type of dropout improves AVG. Moreover, averaging the hidden states of the LSTM is the most effective modification to the LSTM in improving performance. All of these modifications can be combined to significantly improve the LSTM, finally allowing it to overtake AVG. In Table 3, we compare the various GRAN architectures. We find that the GRAN provides a small improvement over the best LSTM configuration, possibly because of its similarity to AVG. It also outperforms the other GRAN models, despite being the simplest. In Table 4, we show results on all individual STS evaluation datasets after using STS 2016 for model selection (unidirectional models only). The LSTMAVG and GATED RECURRENT AVERAGING NETWORK are more closely correlated in performance, in terms of Spearman’s ρ and Pearson’r r, than either is to AVG. 
But they do differ significantly in some datasets, most notably in those comparing machine translation output with its refDataset LSTMAVG AVG GRAN MSRpar 49.0 45.9 47.7 MSRvid 84.3 85.1 85.2 SMT-eur 51.2 47.5 49.3 OnWN 71.5 71.2 71.5 SMT-news 68.0 58.2 58.7 STS 2012 Average 64.8 61.6 62.5 headline 77.3 76.9 76.1 OnWN 81.2 72.8 81.4 FNWN 53.2 50.2 55.6 SMT 40.7 38.0 40.3 STS 2013 Average 63.1 59.4 63.4 deft forum 56.6 55.6 55.7 deft news 78.0 78.5 77.1 headline 74.5 75.1 72.8 images 84.7 85.6 85.8 OnWN 84.9 81.4 85.1 tweet news 76.3 78.7 78.7 STS 2014 Average 75.8 75.8 75.9 answers-forums 71.8 70.6 73.1 answers-students 71.1 75.8 72.9 belief 75.3 76.8 78.0 headline 79.5 80.3 78.6 images 85.8 86.0 85.8 STS 2015 Average 76.7 77.9 77.7 2014 SICK 71.3 72.4 72.9 2015 Twitter 52.1 52.1 50.2 Table 4: Results on SemEval textual similarity datasets (Pearson’s r × 100). The highest score in each row is in boldface. erence. Interestingly, both the LSTMAVG and GATED RECURRENT AVERAGING NETWORK significantly outperform AVG in the datasets focused on comparing glosses like OnWN and FNWN. Upon examination, we found that these datasets, especially 2013 OnWN, contain examples of low similarity with high word overlap. For example, the pair ⟨the act of preserving or protecting something., the act of decreasing or reducing something.⟩from 2013 OnWN has a gold similarity score of 0.4. It appears that AVG was fooled by the high amount of word overlap in such pairs, while the other two models were better able to recognize the semantic differences. 4.3 Supervised Text Similarity We also investigate if these techniques can improve LSTM performance on supervised semantic textual similarity tasks. We evaluate on two supervised datasets. For the first, we start with the 20 SemEval STS datasets from 2012-2015 and then use 40% of each dataset for training, 10% for validation, and the remaining 50% for testing. There are 4,481 examples in training, 1,207 in validation, and 6,060 in the test set. The second is the SICK 2014 dataset, using its standard training, validation, and test sets. There are 4,500 sentence pairs 2083 in the training set, 500 in the development set, and 4,927 in the test set. The SICK task is an easier learning problem since the training examples are all drawn from the same distribution, and they are mostly shorter and use simpler language. As these are supervised tasks, the sentence pairs in the training set contain manually-annotated semantic similarity scores. We minimize the loss function6 from Tai et al. (2015). Given a score for a sentence pair in the range [1, K], where K is an integer, with sentence representations hL and hR, and model parameters θ, they first compute: h× = hL ⊙hR, h+ = |hL −hR|, hs = σ  W (×)h× + W (+)h+ + b(h) , ˆpθ = softmax  W (p)hs + b(p) , ˆy = rT ˆpθ, where rT = [1 2 . . . K]. They then define a sparse target distribution p that satisfies y = rT p: pi =      y −⌊y⌋, i = ⌊y⌋+ 1 ⌊y⌋−y + 1, i = ⌊y⌋ 0 otherwise for 1 ≤i ≤K. Then they use the following loss, the regularized KL-divergence between p and ˆpθ: J(θ) = 1 m m X k=1 KL  p(k) ˆp(k) θ  , where m is the number of training pairs. We experiment with the LSTM, LSTMAVG, and AVG models with dropout, word dropout, and scrambling tuning over the same hyperparameter as in Section 4.2. We again regularize the word embeddings back to their initial state, tuning λw over {10−5, 0}. We used the validation set for each respective dataset for model selection. The results are shown in Table 5. 
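To clarify the objective borrowed from Tai et al. (2015), the sketch below constructs the sparse target distribution p from a gold score y in [1, K] and computes the KL term; the network that produces the predicted distribution from the sentence pair (the equations above) is not shown, and the helper names are ours.

```python
# Sketch of the Tai et al. (2015) similarity objective: the gold score y is
# converted into a sparse target distribution p with r^T p = y, and the model's
# softmax output p_hat is trained with KL(p || p_hat).
import numpy as np

def sparse_target(y, K):
    p = np.zeros(K)
    floor = int(np.floor(y))
    if floor == y:                       # integer score: all mass on one bin
        p[floor - 1] = 1.0
    else:
        p[floor - 1] = floor - y + 1.0   # bin i = floor(y)
        p[floor] = y - floor             # bin i = floor(y) + 1
    return p

def kl_divergence(p, p_hat, eps=1e-12):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (p_hat[mask] + eps))))

p = sparse_target(3.6, K=5)              # -> [0, 0, 0.4, 0.6, 0]
r = np.arange(1, 6)
assert abs(r @ p - 3.6) < 1e-9           # the target reproduces the gold score
```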
The GATED RECURRENT AVERAGING NETWORK has the best performance on both datasets. Dropout helps the word-averaging model in the STS task, unlike in the transfer learning setting. The LSTM benefits slightly from dropout, scrambling, and averaging on their own individually with the exception of word dropout on both datasets and averaging on the SICK dataset. However, when combined, these modifications are able to significantly 6This objective function has been shown to perform very strongly on text similarity tasks, significantly better than squared or absolute error. Model Regularization STS SICK Avg. AVG none 79.2 85.2 82.2 dropout 80.7 84.5 82.6 word dropout 79.3 81.8 80.6 none 68.4 80.9 74.7 dropout 69.6 81.3 75.5 LSTM word dropout 68.0 76.4 72.2 scrambling 74.2 84.4 79.3 dropout, scrambling 75.0 84.2 79.6 LSTMAVG none 69.0 79.5 74.3 dropout 69.2 79.4 74.3 word dropout 65.6 76.1 70.9 scrambling 76.5 83.2 79.9 dropout, scrambling 76.5 84.0 80.3 GRAN none 79.7 85.2 82.5 dropout 79.7 84.6 82.2 word dropout 77.3 83.0 80.2 scrambling 81.4 85.3 83.4 dropout, scrambling 81.6 85.1 83.4 Table 5: Results from supervised training on the STS and SICK datasets (Pearson’s r × 100). The last column is the average result on the two datasets. Model STS SICK Avg. GRAN 81.6 85.3 83.5 GRAN-2 77.4 85.1 81.3 GRAN-3 81.3 85.4 83.4 GRAN-4 80.1 85.5 82.8 GRAN-5 70.9 83.0 77.0 Table 6: Results from supervised training on the STS and SICK datasets (Pearson’s r × 100) for the GRAN architectures. The last column is the average result on the two datasets. improve the performance of the LSTM, bringing it much closer in performance to AVG. This experiment indicates that these modifications when training LSTMs are beneficial outside the transfer learning setting, and can potentially be used to improve performance for the broad range of problems that use LSTMs to model sentences. In Table 6 we compare the various GRAN architectures under the same settings as the previous experiment. We find that the GRAN still has the best overall performance. We also experiment with initializing the supervised models using our pretrained sentence model parameters, for the AVG model (no regularization), LSTMAVG (dropout, scrambling), and GATED RECURRENT AVERAGING NETWORK (dropout, scrambling) models from Table 2 and Table 3. We both initialize and then regularize back to these initial values, referring to this setting as “universal”.7 7In these experiments, we tuned λw over {10, 1, 10−1, 10−2, 10−3, 10−4, 10−5, 10−6, 10−7, 10−8, 0} 2084 # Sentence 1 Sentence 2 LAVG AVG Gold 1 the lamb is looking at the camera. a cat looking at the camera. 3.42 4.13 0.8 2 he also said shockey is “living the dream life of a new york athlete. “jeremy’s a good guy,” barber said, adding:“jeremy is living the dream life of the new york athlete. 3.55 4.22 2.75 3 bloomberg chips in a billion bloomberg gives $1.1 b to university 3.99 3.04 4.0 4 in other regions, the sharia is imposed. in other areas, sharia law is being introduced by force. 4.44 3.72 4.75 5 three men in suits sitting at a table. two women in the kitchen looking at a object. 3.33 2.79 0.0 6 we never got out of it in the first place! where does the money come from in the first place? 4.00 3.33 0.8 7 two birds interacting in the grass. two dogs play with each other outdoors. 3.44 2.81 0.2 Table 7: Illustrative sentence pairs from the STS datasets showing errors made by LSTMAVG and AVG. 
The last three columns show the gold similarity score, the similarity score of LSTMAVG, and the similarity score of AVG. Boldface indicates smaller error compared to gold scores. Model Regularization STS SICK AVG dropout 80.7 84.5 dropout, universal 82.9 85.6 LSTMAVG dropout, scrambling 76.5 84.0 dropout, scrambling, universal 81.3 85.2 GRAN dropout, scrambling 81.6 85.1 dropout, scrambling, universal 82.7 86.0 Table 8: Impact of initializing and regularizing toward universal models (Pearson’s r×100) in supervised training. The results are shown in Table 8. Initializing and regularizing to the pretrained models significantly improves the performance for all three models, justifying our claim that these models serve a dual purpose: they can be used a black box semantic similarity function, and they possess rich knowledge that can be used to improve the performance of downstream tasks. 5 Analysis 5.1 Error Analysis We analyze the predictions of AVG and the recurrent networks, represented by LSTMAVG, on the 20 STS datasets. We choose LSTMAVG as it correlates slightly less strongly with AVG than the GRAN on the results over all SemEval datasets used for evaluation. We scale the models’ cosine similarities to lie within [0, 5], then compare the predicted similarities of LSTMAVG and AVG to the gold similarities. We analyzed instances in which each model would tend to overestimate or underestimate the gold similarity relative to the other. These are illustrated in Table 7. We find that AVG tends to overestimate the semantic similarity of a sentence pair, relative to LSTMAVG, when the two sentences have a lot of and λc over {10, 1, 10−1, 10−2, 10−3, 10−4, 10−5, 10−6, 0}. word or synonym overlap, but have either important differences in key semantic roles or where one sentence has significantly more content than the other. These phenomena are shown in examples 1 and 2 in Table 7. Conversely, AVG tends to underestimate similarity when there are one-word-tomultiword paraphrases between the two sentences as shown in examples 3 and 4. LSTMAVG tends to overestimate similarity when the two inputs have similar sequences of syntactic categories, but the meanings of the sentences are different (examples 5, 6, and 7). Instances of LSTMAVG underestimating the similarity relative to AVG are relatively rare, and those that we found did not have any systematic patterns. 5.2 GRAN Gate Analysis We also investigate what is learned by the gating function of the GATED RECURRENT AVERAGING NETWORK. We are interested to see whether its estimates of importance correlate with those of traditional syntactic and (shallow) semantic analysis. We use the oracle trained GATED RECURRENT AVERAGING NETWORK from Table 3 and calculate the L1 norm of the gate after embedding 10,000 sentences from English Wikipedia.8 We also automatically tag and parse these sentences using the Stanford dependency parser (Manning et al., 2014). We then compute the average gate L1 norms for particular part-of-speech tags, dependency arc labels, and their conjunction. Table 9 shows the highest/lowest average norm tags and dependency labels. The network prefers nouns, especially proper nouns, as well as cardinal numbers, which is sensible as these are among the most discriminative features of a sentence. Analyzing the dependency relations, we find 8We selected only sentences of less than or equal to 15 tokens to ensure more accurate parsing. 2085 POS Dep. Label top 10 bot. 10 top 10 bot. 
10 NNP TO number possessive NNPS WDT nn cop CD POS num det NNS DT acomp auxpass VBG WP appos prep NN IN pobj cc JJ CC vmod mark UH PRP dobj aux VBN EX amod expl JJS WRB conj neg Table 9: POS tags and dependency labels with highest and lowest average GATED RECURRENT AVERAGING NETWORK gate L1 norms. The lists are ordered from highest norm to lowest in the top 10 columns, and lowest to highest in the bottom 10 columns. Dep. Label Weight xcomp 170.6 acomp 167.1 root 157.4 amod 143.1 advmod 121.6 Table 10: Average L1 norms for adjectives (JJ) with selected dependency labels. that nouns in the object position tend to have higher weight than nouns in the subject position. This may relate to topic and focus; the object may be more likely to be the “new” information related by the sentence, which would then make it more likely to be matched by the other sentence in the paraphrase pair. We find that the weights of adjectives depend on their position in the sentence, as shown in Table 10. The highest norms appear when an adjective is an xcomp, acomp, or root; this typically means it is residing in an object-like position in its clause. Adjectives that modify a noun (amod) have Dep. Label Weight pcomp 190.0 amod 178.3 xcomp 176.8 vmod 170.6 root 161.8 auxpass 125.4 prep 121.2 Table 11: Average L1 norms for words with the tag VBG with selected dependency labels. medium weight, and those that modify another adjective or verb (advmod) have low weight. Lastly, we analyze words tagged as VBG, a highly ambiguous tag that can serve many syntactic roles in a sentence. As shown in Table 11, we find that when they are used to modify a noun (amod) or in the object position of a clause (xcomp, pcomp) they have high weight. Medium weight appears when used in verb phrases (root, vmod) and low weight when used as prepositions or auxiliary verbs (prep, auxpass). 6 Conclusion We showed how to modify and regularize LSTMs to improve their performance for learning paraphrastic sentence embeddings in both transfer and supervised settings. We also introduced a new recurrent network, the GATED RECURRENT AVERAGING NETWORK, that improves upon both AVG and LSTMs for these tasks, and we release our code and trained models. Furthermore, we analyzed the different errors produced by AVG and the recurrent methods and found that the recurrent methods were learning composition that wasn’t being captured by AVG. We also investigated the GRAN in order to better understand the compositional phenomena it was learning by analyzing the L1 norm of its gate over various inputs. Future work will explore additional data sources, including from aligning different translations of novels (Barzilay and McKeown, 2001), aligning new articles of the same topic (Dolan et al., 2004), or even possibly using machine translation systems to translate bilingual text into paraphrastic sentence pairs. Our new techniques, combined with the promise of new data sources, offer a great deal of potential for improved universal paraphrastic sentence embeddings. Acknowledgments We thank the anonymous reviewers for their valuable comments. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. We thank the developers of Theano (Theano Development Team, 2016) and NVIDIA Corporation for donating GPUs used in this research. 
2086 References Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. Proceedings of SemEval pages 497–511. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of the International Conference on Learning Representations. Regina Barzilay and Kathleen R McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th annual meeting on Association for Computational Linguistics. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. William Coster and David Kauchak. 2011. Simple english wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics. Juri Ganitkevitch and Chris Callison-Burch. 2014. The multilingual paraphrase database. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of HLT-NAACL. Felix A. Gers, Nicol N. Schraudolph, and J¨urgen Schmidhuber. 2003. Learning precise timing with LSTM recurrent networks. The Journal of Machine Learning Research 3. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8). Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems. Ozan ˙Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations. 2087 Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Pengfei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015. Multi-timescale long shortterm memory neural network for modelling sentences and documents. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science 34(8). Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2017. Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. arXiv preprint arXiv:1703.02507 . Nghia The Pham, Germ´an Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. 
Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1). Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016a. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016b. Towards universal paraphrastic sentence embeddings. In Proceedings of the International Conference on Learning Representations. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL (TACL) . Wei Xu, Chris Callison-Burch, and William B Dolan. 2015. SemEval-2015 task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval). 2088
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2089–2098 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1191
Ontology-Aware Token Embeddings for Prepositional Phrase Attachment Pradeep Dasigi1 Waleed Ammar2 Chris Dyer1,3 Eduard Hovy1 1Language Technologies Institute, Carnegie Mellon University, Pittsburgh PA, USA 2Allen Institute for Artificial Intelligence, Seattle WA, USA 3DeepMind, London, UK [email protected], [email protected], [email protected], [email protected]
Abstract Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language. Instead, we embed semantic concepts (or synsets) as defined in WordNet and represent a word token in a particular context by estimating a distribution over relevant semantic concepts. We use the new, context-sensitive embeddings in a model for predicting prepositional phrase (PP) attachments and jointly learn the concept embeddings and model parameters. We show that using context-sensitive embeddings improves the accuracy of the PP attachment model by 5.4% absolute points, which amounts to a 34.4% relative reduction in errors.
1 Introduction Type-level word embeddings map a word type (i.e., a surface form) to a dense vector of real numbers such that similar word types have similar embeddings. When pre-trained on a large corpus of unlabeled text, they provide an effective mechanism for generalizing statistical models to words which do not appear in the labeled training data for a downstream task. In accordance with standard terminology, we make the following distinction between types and tokens in this paper: By word types, we mean the surface form of the word, whereas by tokens we mean the instantiation of the surface form in a context. For example, the same word type ‘pool’ occurs as two different tokens in the sentences “He sat by the pool,” and “He played a game of pool.” Most word embedding models define a single vector for each word type. However, a fundamental flaw in this design is their inability to distinguish between different meanings and abstractions of the same word. In the two sentences shown above, the word ‘pool’ has different meanings, but the same representation is typically used for both of them. Similarly, the fact that ‘pool’ and ‘lake’ are both kinds of water bodies is not explicitly incorporated in most type-level embeddings. Furthermore, it has become a standard practice to tune pre-trained word embeddings as model parameters during training for an NLP task (e.g., Chen and Manning, 2014; Lample et al., 2016), potentially allowing the parameters of a frequent word in the labeled training data to drift away from related but rare words in the embedding space. Previous work partially addresses these problems by estimating concept embeddings in WordNet (e.g., Rothe and Sch¨utze, 2015), or improving word representations using information from knowledge graphs (e.g., Faruqui et al., 2015). However, it is still not clear how to use a lexical ontology to derive context-sensitive token embeddings.
In this work, we represent a word token in a given context by estimating a context-sensitive probability distribution over relevant concepts in WordNet (Miller, 1995) and use the expected value (i.e., weighted sum) of the concept embeddings as the token representation (see §2). We take a task-centric approach towards doing this, and learn the token representations jointly with the task-specific parameters. In addition to providing context-sensitive token embeddings, the proposed method implicitly regularizes the embeddings of related words by forcing related words to share similar concept embeddings. As a result, the representation of a rare word which does not appear in the training data for a downstream task benefits from all the updates to related words which share one or more concept embeddings. 2089 Figure 1: An example grounding for the word ‘pool’. Solid arrows represent possible senses and dashed arrows represent hypernym relations. Note that the same set of concepts are used to ground the word ‘pool’ regardless of its context. Other WordNet senses for ‘pool’ were removed from the figure for simplicity. Our approach to context-sensitive embeddings assumes the availability of a lexical ontology. While this work relies on WordNet, and we exploit the order of senses given by WordNet, our model is, in principle applicable to any ontology, with appropriate modifications. In this work, we do not assume the inputs are sense tagged. We use the proposed embeddings to predict prepositional phrase (PP) attachments (see §3), a challenging problem which emphasizes the selectional preferences between words in the PP and each of the candidate head words. Our empirical results and detailed analysis (see §4) show that the proposed embeddings effectively use WordNet to improve the accuracy of PP attachment predictions. 2 WordNet-Grounded Context-Sensitive Token Embeddings In this section, we focus on defining our contextsensitive token embeddings. We first describe our grounding of word types using WordNet concepts. Then, we describe our model of contextsensitive token-level embeddings as a weighted sum of WordNet concept embeddings. 2.1 WordNet Grounding We use WordNet to map each word type to a set of synsets, including possible generalizations or abstractions. Among the labeled relations defined in WordNet between different synsets, we focus on the hypernymy relation to help model generalization and selectional preferences between words, which is especially important for predicting PP attachments (Resnik, 1993). To ground a word type, we identify the set of (direct and indirect) hypernyms of the WordNet senses of that word. A simplified grounding of the word ‘pool’ is illustrated in Figure 1. This grounding is key to our model of token embeddings, to be described in the following subsections. 2.2 Context-Sensitive Token Embeddings Our goal is to define a context-sensitive model of token embeddings which can be used as a dropin replacement for traditional type-level word embeddings. Notation. Let Senses(w) be the list of synsets defined as possible word senses of a given word type w in WordNet, and Hypernyms(s) be the list of hypernyms for a synset s.1 For example, according to Figure 1: Senses(pool) = [pond.n.01, pool.n.09], and Hypernyms(pond.n.01) = [pond.n.01, lake.n.01, body of water.n.01, entity.n.01] Each WordNet synset s is associated with a set of parameters vs ∈Rn which represent its embedding. This parameterization is similar to that of Rothe and Sch¨utze (2015). Embedding model. 
Given a sequence of tokens t and their corresponding word types w, let ui ∈ Rn be the embedding of the word token ti at position i. Unlike most embedding models, the token embeddings ui are not parameters. Rather, ui is computed as the expected value of concept embeddings used to ground the word type wi corresponding to the token ti: ui = X s∈Senses(wi) X s′∈Hypernyms(s) p(s, s′ | t, w, i) vs′ (1) such that X s∈Senses(wi) X s′∈Hypernyms(s) p(s, s′ | t, w, i) = 1 1For notational convenience, we assume that s ∈ Hypernyms(s). 2090 Figure 2: Steps for computing the contextsensitive token embedding for the word ‘pool’, as described in §2.2. The distribution which governs the expectation over synset embeddings factorizes into two components: p(s, s′ | t, w, i) ∝λwi exp−λwi rank(s,wi) × MLP([vs′; context(i, t)]) (2) The first component, λwi exp−λwi rank(s,wi), is a sense prior which reflects the prominence of each word sense for a given word type. Here, we exploit2 the fact that WordNet senses are ordered in descending order of their frequencies, obtained from sense tagged corpora, and parameterize the sense prior like an exponential distribution. rank(s, wi) denotes the rank of sense s for the word type wi, thus rank(s, wi) = 0 corresponds to s being the first sense of wi. The scalar parameter (λwi) controls the decay of the probability mass, which is learned along with the other parameters in the model. Note that sense priors are defined for each word type (wi), and are shared across all tokens which have the same word type. MLP([vs′; context(i, t)]), the second component, is what makes the token representations context-sensitive. It scores each concept in the WordNet grounding of wi by feeding the concatenation of the concept embedding and a dense vec2Note that for ontologies where such information is not available, our method is still applicable but without this component. We show the effect of using a uniform sense prior in §4.2. tor that summarizes the textual context into a multilayer perceptron (MLP) with two tanh layers followed by a softmax layer. This component is inspired by the soft attention often used in neural machine translation (Bahdanau et al., 2014).3 The definition of the context function is dependent on the encoder used to encode the context. We describe a specific instantiation of this function in §3. To summarize, Figure 2 illustrates how to compute the embedding of a word token ti = ‘pool’ in a given context: 1. compute a summary of the context context(i, t), 2. enumerate related concepts for ti, 3. compute p(s, s′ | t, w, i) for each pair (s, s′), and 4. compute ui = E[vs′]. In the following section, we describe our model for predicting PP attachments, including our definition for context. 3 PP Attachment Disambiguating PP attachments is an important and challenging NLP problem. Since modeling hypernymy and selectional preferences is critical for successful prediction of PP attachments (Resnik, 1993), it is a good fit for evaluating our WordNet-grounded context-sensitive embeddings. Figure 3, reproduced from Belinkov et al. (2014), illustrates an example of the PP attachment prediction problem. 
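As a rough illustration of Eqs. (1) and (2), the following sketch computes a context-sensitive token embedding as an attention-weighted sum over WordNet synset and hypernym embeddings, with an exponential sense prior over sense ranks. It makes several simplifying assumptions that are not part of the paper: NLTK's WordNet interface (which must be installed separately) stands in for the grounding step, randomly initialized vectors stand in for the learned synset embeddings, and a simple dot product replaces the two-layer MLP attention.

```python
# Simplified sketch of the WordNet-grounded token embedding of Eqs. (1)-(2).
import numpy as np
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus

rng = np.random.RandomState(0)
DIM = 50
synset_emb = {}                         # lazily created placeholder embeddings

def vec(synset):
    return synset_emb.setdefault(synset.name(), rng.randn(DIM))

def grounding(word, pos=wn.NOUN, max_senses=3, max_hyp=5):
    """(sense rank, synset) pairs: each sense plus its shortest hypernym path."""
    pairs = []
    for rank, sense in enumerate(wn.synsets(word, pos=pos)[:max_senses]):
        path = min(sense.hypernym_paths(), key=len)[::-1]  # sense ... root
        for hyp in path[:max_hyp + 1]:   # a sense counts as its own hypernym
            pairs.append((rank, hyp))
    return pairs

def token_embedding(word, context_vec, lam=1.0):
    pairs = grounding(word)
    if not pairs:                        # word not covered by WordNet
        return synset_emb.setdefault(word, rng.randn(DIM))
    prior = np.array([lam * np.exp(-lam * r) for r, _ in pairs])   # sense prior
    attn = np.array([vec(h) @ context_vec for _, h in pairs])      # MLP stand-in
    scores = prior * np.exp(attn - attn.max())
    probs = scores / scores.sum()        # p(s, s' | context), normalized
    return sum(p * vec(h) for p, (_, h) in zip(probs, pairs))      # expectation

u = token_embedding("pool", context_vec=rng.randn(DIM))
```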
The accuracy of a competitive English dependency parser at predicting the head word of an ambiguous prepositional phrase is 88.5%, significantly lower than the overall unlabeled attachment accuracy of the same parser (94.2%).4 This section formally defines the problem of PP attachment disambiguation, describes our baseline model, then shows how to integrate the token-level embeddings in the model. 3.1 Problem Definition We follow Belinkov et al. (2014)’s definition of the PP attachment problem. Given a preposition p and 3Although soft attention mechanism is typically used to explicitly represent the importance of each item in a sequence, it can also be applied to non-sequential items. 4See Table 2 in §4 for detailed results. 2091 Figure 3: Two sentences illustrating the importance of lexicalization in PP attachment decisions. In the top sentence, the PP ‘with butter’ attaches to the noun ‘spaghetti’. In the bottom sentence, the PP ‘with chopsticks’ attaches to the verb ‘ate’. Note: This figure and caption have been reproduced from Belinkov et al. (2014). its direct dependent d in the prepositional phrase (PP), our goal is to predict the correct head word for the PP among an ordered list of candidate head words h. Each example in the train, validation, and test sets consists of an input tuple ⟨h, p, d⟩ and an output index k to identify the correct head among the candidates in h. Note that the order of words that form each ⟨h, p, d⟩is the same as that in the corresponding original sentence. 3.2 Model Definition Both our proposed and baseline models for PP attachment use bidirectional RNN with LSTM cells (bi-LSTM) to encode the sequence t = ⟨h1, . . . , hK, p, d⟩. We score each candidate head by feeding the concatenation of the output bi-LSTM vectors for the head hk, the preposition p and the direct dependent d through an MLP, with a fully connected tanh layer to obtain a non-linear projection of the concatenation, followed by a fully-connected softmax layer: p(hkis head) = MLPattach([lstm out(hk); lstm out(p); lstm out(d)]) (3) To train the model, we use cross-entropy loss at the output layer for each candidate head in the training set. At test time, we predict the candidate head with the highest probability according to the model in Eq. 3, i.e., ˆk = arg max k p(hkis head = 1). (4) This model is inspired by the Head-Prep-ChildTernary model of Belinkov et al. (2014). The main difference is that we replace the input features for each token with the output bi-RNN vectors. We now describe the difference between the proposed and the baseline models. Generally, let lstm in(ti) and lstm out(ti) represent the input and output vectors of the bi-LSTM for each token ti ∈ t in the sequence. The outputs at each timestep are obtained by concatenating those of the forward and backward LSTMs. Baseline model. In the baseline model, we use type-level word embeddings to represent the input vector lstm in(ti) for a token ti in the sequence. The word embedding parameters are initialized with pre-trained vectors, then tuned along with the parameters of the bi-LSTM and MLPattach. We call this model LSTM-PP. Proposed model. In the proposed model, we use token level word embedding as described in §2 as the input to the bi-LSTM, i.e., lstm in(ti) = ui. The context used for the attention component is simply the hidden state from the previous timestep. However, since we use a bi-LSTM, the model essentially has two RNNs, and accordingly we have two context vectors, and associated attentions. 
That is, contextf(i, t) = lstm in(ti−1) for the forward RNN and contextb(i, t) = lstm in(ti+1) for the backward RNN. Consequently, each token gets two representations, one from each RNN. The synset embedding parameters are initialized with pre-trained vectors and tuned along with the sense decay (λw) and MLP parameters from Eq. 2, the parameters of the biLSTM and those of MLPattach. We call this model OntoLSTM-PP. 4 Experiments Dataset and evaluation. We used the English PP attachment dataset created and made available by Belinkov et al. (2014). The training and test splits contain 33,359 and 1951 labeled examples respectively. As explained in §3.1, the input for each example is 1) an ordered list of candidate head words, 2) the preposition, and 3) the direct 2092 dependent of the preposition. The head words are either nouns or verbs and the dependent is always a noun. All examples in this dataset have at least two candidate head words. As discussed in Belinkov et al. (2014), this dataset is a more realistic PP attachment task than the RRR dataset (Ratnaparkhi et al., 1994). The RRR dataset is a binary classification task with exactly two head word candidates in all examples. The context for each example in the RRR dataset is also limited which defeats the purpose of our context-sensitive embeddings. Model specifications and hyperparameters. For efficient implementation, we use mini-batch updates with the same number of senses and hypernyms for all examples, padding zeros and truncating senses and hypernyms as needed. For each word type, we use a maximum of S senses and H indirect hypernyms from WordNet. In our initial experiments on a held-out development set (10% of the training data), we found that values greater than S = 3 and H = 5 did not improve performance. We also used the development set to tune the number of layers in MLPattach separately for the OntoLSTM-PP and LSTM-PP, and the number of layers in the attention MLP in OntoLSTM-PP. When a synset has multiple hypernym paths, we use the shortest one. Finally, words types which do not appear in WordNet are assumed to have one unique sense per word type with no hypernyms. Since the POS tag for each word is included in the dataset, we exclude WordNet synsets which are incompatible with the POS tag. The synset embedding parameters are initialized using the synset vectors obtained by running AutoExtend (Rothe and Sch¨utze, 2015) on 100dimensional GloVe (Pennington et al., 2014) vectors for WordNet 3.1. We refer to this embedding as GloVe-extended. Representation for the OOV word types in LSTM-PP and OOV synset types in OntoLSTM-PP were randomly drawn from a uniform 100-d distribution. Initial sense prior parameters (λw) were also drawn from a uniform 1-d distribution. Baselines. In our experiments, we compare our proposed model, OntoLSTM-PP with three baselines – LSTM-PP initialized with GloVe embedding, LSTM-PP initialized with GloVe vectors retrofitted to WordNet using the approach of Faruqui et al. (2015) (henceforth referred to as GloVe-retro), and finally the best performing standalone PP attachment system from Belinkov et al. (2014), referred to as HPCD (full) in the paper. HPCD (full) is a neural network model that learns to compose the vector representations of each of the candidate heads with those of the preposition and the dependent, and predict attachments. 
The input representations are enriched using syntactic context information, POS, WordNet and VerbNet (Kipper et al., 2008) information and the distance of the head word from the PP is explicitly encoded in composition architecture. In contrast, we do not use syntactic context, VerbNet and distance information, and do not explicitly encode POS information. 4.1 PP Attachment Results Table 1 shows that our proposed token level embedding scheme OntoLSTM-PP outperforms the better variant of our baseline LSTM-PP (with GloVe-retro intialization) by an absolute accuracy difference of 4.9%, or a relative error reduction of 32%. OntoLSTM-PP also outperforms HPCD (full), the previous best result on this dataset. Initializing the word embeddings with GloVeretro (which uses WordNet as described in Faruqui et al. (2015)) instead of GloVe amounts to a small improvement, compared to the improvements obtained using OntoLSTM-PP. This result illustrates that our approach of dynamically choosing a context sensitive distribution over synsets is a more effective way of making use of WordNet. Effect on dependency parsing. Following Belinkov et al. (2014), we used RBG parser (Lei et al., 2014), and modified it by adding a binary feature indicating the PP attachment predictions from our model. We compare four ways to compute the additional binary features: 1) the predictions of the best standalone system HPCD (full) in Belinkov et al. (2014), 2) the predictions of our baseline model LSTM-PP, 3) the predictions of our improved model OntoLSTM-PP, and 4) the gold labels Oracle PP. Table 2 shows the effect of using the PP attachment predictions as features within a dependency parser. We note there is a relatively small difference in unlabeled attachment accuracy for all dependencies (not only PP attachments), even when gold PP attachments are used as additional features to the parser. However, when gold PP attachment are used, we note a large potential improve2093 System Initialization Embedding Resources Test Acc. HPCD (full) Syntactic-SG Type WordNet, VerbNet 88.7 LSTM-PP GloVe Type 84.3 LSTM-PP GloVe-retro Type WordNet 84.8 OntoLSTM-PP GloVe-extended Token WordNet 89.7 Table 1: Results on Belinkov et al. (2014)’s PPA test set. HPCD (full) is from the original paper, and it uses syntactic SkipGram. GloVe-retro is GloVe vectors retrofitted (Faruqui et al., 2015) to WordNet 3.1, and GloVe-extended refers to the synset embeddings obtained by running AutoExtend (Rothe and Sch¨utze, 2015) on GloVe. System Full UAS PPA Acc. RBG 94.17 88.51 RBG + HPCD (full) 94.19 89.59 RBG + LSTM-PP 94.14 86.35 RBG + OntoLSTM-PP 94.30 90.11 RBG + Oracle PP 94.60 98.97 Table 2: Results from RBG dependency parser with features coming from various PP attachment predictors and oracle attachments. ment of 10.46 points in PP attachment accuracies (between the PPA accuracy for RBG and RBG + Oracle PP), which confirms that adding PP predictions as features is an effective approach. Our proposed model RBG + OntoLSTM-PP recovers 15% of this potential improvement, while RBG + HPCD (full) recovers 10%, which illustrates that PP attachment remains a difficult problem with plenty of room for improvements even when using a dedicated model to predict PP attachments and using its predictions in a dependency parser. We also note that, although we use the same predictions of the HPCD (full) model in Belinkov et al. (2014)5, we report different results than Belinkov et al. (2014). 
For example, the unlabeled attachment score (UAS) of the baselines RBG and RBG + HPCD (full) are 94.17 and 94.19, respectively, in Table 2, compared to 93.96 and 94.05, respectively, in Belinkov et al. (2014). This is due to the use of different versions of the RBG parser.6 4.2 Analysis In this subsection, we analyze different aspects of our model in order to develop a better understand5The authors kindly provided their predictions for 1942 test examples (out of 1951 examples in the full test set). In Table 2, we use the same subset of 1942 test examples and will include a link to the subset in the final draft. 6We use the latest commit (SHA: e07f74) on the GitHub repository of the RGB parser. ing of its behavior. Effect of context sensitivity and sense priors. We now show some results that indicate the relative strengths of two components of our contextsensitive token embedding model. The second row in Table 3 shows the test accuracy of a system trained without sense priors (that is, making p(s|wi) from Eq. 1 a uniform distribution), and the third row shows the effect of making the token representations context-insensitive by giving a similar attention score to all related concepts, essentially making them type level representations, but still grounded in WordNet. As it can be seen, removing context sensitivity has an adverse effect on the results. This illustrates the importance of the sense priors and the attention mechanism. It is interesting that, even without sense priors and attention, the results with WordNet grounding is still higher than that of the two LSTM-PP systems in Table 1. This result illustrates the regularization behavior of sharing concept embeddings across multiple words, which is especially important for rare words. Effect of training data size. Since OntoLSTMPP uses external information, the gap between the model and LSTM-PP is expected to be more pronounced when the training data sizes are smaller. To test this hypothesis, we trained the two models with different amounts of training data and measured their accuracies on the test set. The plot is shown in Figure 4. As expected, the gap tends to be larger at smaller data sizes. Surprisingly, even with 2000 sentences in the training data set, OntoLSTM-PP outperforms LSTM-PP trained with the full data set. When both the models are trained with the full dataset, LSTM-PP reaches a training accuracy of 95.3%, whereas OntoLSTMPP reaches 93.5%. The fact that LSTM-PP is overfitting the training data more, indicates the regular2094 Figure 4: Effect of training data size on test accuracies of OntoLSTM-PP and LSTM-PP. Model PPA Acc. full 89.7 - sense priors 88.4 - attention 87.5 Table 3: Effect of removing sense priors and context sensitivity (attention) from the model. ization capability of OntoLSTM-PP. Qualitative analysis. To better understand the effect of WordNet grounding, we took a sample of 100 sentences from the test set whose PP attachments were correctly predicted by OntoLSTMPP but not by LSTM-PP. A common pattern observed was that those sentences contained words not seen frequently in the training data. Figure 5 shows two such cases. In both cases, the weights assigned by OntoLSTM-PP to infrequent words are also shown. The word types soapsuds and buoyancy do not occur in the training data, but OntoLSTM-PP was able to leverage the parameters learned for the synsets that contributed to their token representations. 
Another important observation is that the word type buoyancy has four senses in WordNet (we consider the first three), none of which is the metaphorical sense that is applicable to markets as shown in the example here. Selecting a combination of relevant hypernyms from various senses may have helped OntoLSTM-PP make the right prediction. This shows the value of using hypernymy information from WordNet. Moreover, this indicates the strength of the hybrid nature of the model, that lets it augment ontological information with distributional information. Parameter space. We note that the vocabulary sizes in OntoLSTM-PP and LSTM-PP are comparable as the synset types are shared across word types. In our experiments with the full PP attachment dataset, we learned embeddings for 18k synset types with OntoLSTM-PP and 11k word types with LSTM-PP. Since the biggest contribution to the parameter space comes from the embedding layer, the complexities of both the models are comparable. 5 Related Work This work is related to various lines of research within the NLP community: dealing with synonymy and homonymy in word representations both in the context of distributed embeddings and more traditional vector spaces; hybrid models of distributional and knowledge based semantics; and selectional preferences and their relation with syntactic and semantic relations. The need for going beyond a single vector per word-type has been well established for a while, and many efforts were focused on building multi-prototype vector space models of meaning (Reisinger and Mooney, 2010; Huang et al., 2012; Chen et al., 2014; Jauhar et al., 2015; Neelakantan et al., 2015; Arora et al., 2016, etc.). However, the target of all these approaches is obtaining multisense word vector spaces, either by incorporating sense tagged information or other kinds of external context. The number of vectors learned is still fixed, based on the preset number of senses. In contrast, our focus is on learning a context dependent distribution over those concept representations. Other work not necessarily related to multisense vectors, but still related to our work includes Belanger and Kakade (2015)’s work which proposed a Gaussian linear dynamical system for estimating token-level word embeddings, and Vilnis and McCallum (2015)’s work which proposed mapping each word type to a density instead of a point in a space to account for uncertainty in meaning. These approaches do not make use of lexical ontologies and are not amenable for joint training with a downstream NLP task. Related to the idea of concept embeddings is Rothe and Sch¨utze (2015) who estimated WordNet synset representations, given pre-trained typelevel word embeddings. In contrast, our work focuses on estimating token-level word embeddings as context sensitive distributions of concept em2095 Figure 5: Two examples from the test set where OntoLSTM-PP predicts the head correctly and LSTM-PP does not, along with weights by OntoLSTM-PP to synsets that contribute to token representations of infrequent word types. The prepositions are shown in bold, LSTM-PP’s predictions in red and OntoLSTMPP’s predictions in green. Words that are not candidate heads or dependents are shown in brackets. beddings. There is a large body of work that tried to improve word embeddings using external resources. Yu and Dredze (2014) extended the CBOW model (Mikolov et al., 2013) by adding an extra term in the training objective for generating words conditioned on similar words according to a lexicon. 
Jauhar et al. (2015) extended the skipgram model (Mikolov et al., 2013) by representing word senses as latent variables in the generation process, and used a structured prior based on the ontology. Faruqui et al. (2015) used belief propagation to update pre-trained word embeddings on a graph that encodes lexical relationships in the ontology. Similarly, Johansson and Pina (2015) improved word embeddings by representing each sense of the word in a way that reflects the topology of the semantic network they belong to, and then representing the words as convex combinations of their senses. In contrast to previous work that was aimed at improving type level word representations, we propose an approach for obtaining context-sensitive embeddings at the token level, while jointly optimizing the model parameters for the NLP task of interest. Resnik (1993) showed the applicability of semantic classes and selectional preferences to resolving syntactic ambiguity. Zapirain et al. (2013) applied models of selectional preferences automatically learned from WordNet and distributional information, to the problem of semantic role labeling. Resnik (1993); Brill and Resnik (1994); Agirre (2008) and others have used WordNet information towards improving prepositional phrase attachment predictions. 6 Conclusion In this paper, we proposed a grounding of lexical items which acknowledges the semantic ambiguity of word types using WordNet and a method to learn a context-sensitive distribution over their representations. We also showed how to integrate the proposed representation with recurrent neural networks for disambiguating prepositional phrase attachments, showing that the proposed WordNetgrounded context-sensitive token embeddings outperforms standard type-level embeddings for predicting PP attachments. We provided a detailed qualitative and quantitative analysis of the proposed model. Implementation and code availability. The models are implemented using Keras (Chollet, 2015), and the functionality is available at https://github.com/pdasigi/ onto-lstm in the form of Keras layers to make it easier to use the proposed embedding model in other NLP problems. 2096 Future work. This approach may be extended to other NLP tasks that can benefit from using encoders that can access WordNet information. WordNet also has some drawbacks, and may not always have sufficient coverage given the task at hand. As we have shown in §4.2, our model can deal with missing WordNet information by augmenting it with distributional information. Moreover, the methods described in this paper can be extended to other kinds of structured knowledge sources like Freebase which may be more suitable for tasks like question answering. Acknowledgements The first author is supported by a fellowship from the Allen Institute for Artificial Intelligence. We would like to thank Matt Gardner, Jayant Krishnamurthy, Julia Hockenmaier, Oren Etzioni, Hector Liu, Filip Ilievski, and anonymous reviewers for their comments. References Eneko Agirre. 2008. Improving parsing and pp attachment performance with sense information. In ACL. Citeseer. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. Linear algebraic structure of word senses, with applications to polysemy. arXiv preprint arXiv:1601.03764 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. David Belanger and Sham M. Kakade. 2015. A linear dynamical system model for text. 
In ICML. Yonatan Belinkov, Tao Lei, Regina Barzilay, and Amir Globerson. 2014. Exploring compositional architectures and word vector representations for prepositional phrase attachment. Transactions of the Association for Computational Linguistics 2:561–572. Eric Brill and Philip Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Proceedings of the 15th conference on Computational linguistics-Volume 2. Association for Computational Linguistics, pages 1198–1204. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP. pages 740–750. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In EMNLP. pages 1025–1035. Franc¸ois Chollet. 2015. Keras. https://github. com/fchollet/keras. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, pages 873–882. Sujay Kumar Jauhar, Chris Dyer, and Eduard H. Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In NAACL. Richard Johansson and Luis Nieto Pina. 2015. Embedding a semantic network in a word space. In In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics–Human Language Technologies. Citeseer. Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of english verbs. Language Resources and Evaluation 42(1):21–40. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL. Tao Lei, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In ACL. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. CoRR abs/1310.4546. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint arXiv:1504.06654 . Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In Proceedings of the workshop on Human Language Technology. 2097 Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In HLT-ACL. Philip Resnik. 1993. Semantic classes and syntactic ambiguity. In Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In ACL. Luke Vilnis and Andrew McCallum. 2015. Word representations via gaussian embedding. In ICLR. Mo Yu and Mark Dredze. 
2014. Improving lexical embeddings with semantic knowledge. In ACL. Benat Zapirain, Eneko Agirre, Lluis Marquez, and Mihai Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2099–2109 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1192 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2099–2109 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1192 Identifying 1950s American Jazz Musicians: Fine-Grained IsA Extraction via Modifier Composition Ellie Pavlick∗ University of Pennsylvania 3330 Walnut Street Philadelphia, Pennsylvania 19104 [email protected] Marius Pa¸sca Google Inc. 1600 Amphitheatre Parkway Mountain View, California 94043 [email protected] Abstract We present a method for populating fine-grained classes (e.g., “1950s American jazz musicians”) with instances (e.g., Charles Mingus). While stateof-the-art methods tend to treat class labels as single lexical units, the proposed method considers each of the individual modifiers in the class label relative to the head. An evaluation on the task of reconstructing Wikipedia category pages demonstrates a >10 point increase in AUC, over a strong baseline relying on widely-used Hearst patterns. 1 Introduction The majority of approaches (Snow et al., 2006; Shwartz et al., 2016) for extracting IsA relations from text rely on lexical patterns as the primary signal of whether an instance belongs to a class. For example, observing a pattern like “X such as Y” is a strong indication that Y (e.g., “Charles Mingus”) is an instance of class X (e.g., “musician”) (Hearst, 1992). Methods based on these “Hearst patterns” assume that class labels can be treated as atomic lexicalized units. This assumption has several significant weakness. First, in order to recognize an instance of a class, these patternbased methods require that the entire class label be observed verbatim in text. The requirement is reasonable for class labels containing a single word, but in practice, there are many possible fine-grained classes: not only “musicians” but also “1950s American jazz musicians”. The probability that a given label will appear in its entirety within one of the expected patterns is very low, even in large ∗Contributed during an internship at Google. 1950s American jazz musicians . . . seminal musicians such as Charles Mingus and George Russell. . . . . . A virtuoso bassist and composer, Mingus irrevocably changed the face of jazz. . . . . . Mingus truly was a product of America in all its historic complexities. . . . . . Mingus dominated the scene back in the 1950s and 1960s. . . Figure 1: We extract instances of fine-grained classes by considering each of the modifiers in the class label individually. This allows us to extract instances even when the full class label never appears in text. amounts of text. Second, when class labels are treated as though they cannot be decomposed, every class label must be modeled independently, even those containing overlapping words (“American jazz musician”, “French jazz musician”). As a result, the number of meaning representations to be learned is exponential in the length of the class label, and quickly becomes intractable. Thus, compositional models of taxonomic relations are necessary for better language understanding. We introduce a compositional approach for reasoning about fine-grained class labels. 
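For concreteness, the sketch below shows the kind of pattern matching that underlies Hearst-style IsA extraction. It is a simplified illustration with a single pattern and a naive regular expression, not the extraction pipeline used later in the paper, and the example sentence is taken from Figure 1.

# Simplified illustration of Hearst-pattern IsA extraction (one pattern,
# single-word class label); not the extraction pipeline of Section 4.
import re

SUCH_AS = re.compile(r"(?P<cls>\w+) such as (?P<inst>[A-Z]\w+(?: [A-Z]\w+)*)")

def hearst_pairs(sentence):
    return [(m.group("inst"), m.group("cls")) for m in SUCH_AS.finditer(sentence)]

print(hearst_pairs("... seminal musicians such as Charles Mingus ..."))
# [('Charles Mingus', 'musicians')]

A multi-word class label is only recovered if the entire label happens to occur inside such a pattern, which is the sparsity problem this paper addresses.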
Our approach is based on the notion from formal semantics, in which modifiers (“1950s”) correspond to properties that differentiate instances of a subclass (“1950s musicians”) from instances of the superclass (“musicians”) (Heim and Kratzer, 1998). Our method consists of two stages: interpreting each modifier relative to the head (“musicians active during 1950s”), and using the interpretations to identify instances of the class from text (Figure 1). Our main contributions are: 1) a compositional method for IsA extraction, which in2099 volves a novel application of noun-phrase paraphrasing methods to the task of semantic taxonomy induction and 2) the operationalization of a formal semantics framework to address two aspects of semantics that are often kept separate in NLP: assigning intrinsic “meaning” to a phrase, and reasoning about that phrase in a truth-theoretic context. 2 Related Work Noun Phrase Interpretation. Compound noun phrases (“jazz musician”) communicate implicit semantic relations between modifiers and the head. Many efforts to provide semantic interpretations of such phrases rely on matching the compound to pre-defined patterns or semantic ontologies (Fares et al., 2015; ´O S´eaghdha and Copestake, 2007; Tratz and Hovy, 2010; Surtani and Paul, 2015; Choi et al., 2015). Recently, interpretations may take the form of arbitrary natural language predicates (Hendrickx et al., 2013). Most approaches are supervised, comparing unseen noun compounds to the most similar phrase seen in training (Wijaya and Gianfortoni, 2011; Nulty and Costello, 2013; Van de Cruys et al., 2013). Other unsupervised approaches apply information extraction techniques to paraphrase noun compounds (Kim and Nakov, 2011; Xavier and Strube de Lima, 2014; Pa¸sca, 2015). They focus exclusively on providing good paraphrases for an input noun compound. To our knowledge, ours is the first attempt to use these interpretations for the downstream task of IsA relation extraction. IsA Relation Extraction. Most efforts to acquire taxonomic relations from text build on the seminal work of Hearst (1992), which observes that certain textual patterns–e.g., “X and other Y”–are high-precision indicators of whether X is a member of class Y. Recent work focuses on learning such patterns automatically from corpora (Snow et al., 2006; Shwartz et al., 2016). These IsA extraction techniques provide a key step for the more general task of knowledge base population. The “universal schema” approach (Riedel et al., 2013; Kirschnick et al., 2016; Verga et al., 2017), which infers relations using matrix factorization, often includes Hearst patterns as input features. Graphical (Bansal et al., 2014) and joint inference models (Movshovitz-Attias and Cohen, 2015) typically require Hearst patterns to define an inventory of possible classes. A separate line of work avoids Hearst patterns by instead exploiting semi-structured data from HTML markup (Wang and Cohen, 2009; Dalvi et al., 2012; Pasupat and Liang, 2014). These approaches all share the limitation that, in practice, in order for a class to be populated with instances, the entire class label has to have been observed verbatim in text. This requirement limits the ability to handle arbitrarily fine-grained classes. Our work addresses this limitation by modeling fine-grained class labels compositionally. Thus the proposed method can combine evidence from multiple sentences, and can perform IsA extraction without requiring any example instances of a given class.1 Taxonomy Construction. 
Previous work on the construction of a taxonomy of IsA relations (Flati et al., 2014; de Melo and Weikum, 2010; Kozareva and Hovy, 2010; Ponzetto and Strube, 2007; Ponzetto and Navigli, 2009) considers that task to be different than extracting a flat set of IsA relations from text in practice. Challenges specific to taxonomy construction include overall concept positioning and how to discover whether concepts are unrelated, subordinated or parallel to each other (Kozareva and Hovy, 2010); the need to refine and enrich the taxonomy (Flati et al., 2014); the difficulty in adding relevant IsA relations towards the top of the taxonomy (Ponzetto and Navigli, 2009); eliminating cycles and inconsistencies (Ponzetto and Navigli, 2009; Kozareva and Hovy, 2010). For practical purposes, these challenges are irrelevant when extracting flat IsA relations. Whereas Flati et al. (2014); Bizer et al. (2009); de Melo and Weikum (2010); Nastase and Strube (2013); Ponzetto and Strube (2007); Ponzetto and Navigli (2009); Hoffart et al. (2013) rely on data within human-curated resources, our work operates over unstructured text. Resources constructed in Bizer et al. (2009); Nastase and Strube (2013); Hoffart et al. (2013) contain not just a taxonomy of IsA relations, 1Pasupat and Liang (2014) also focuses on zero-shot IsA extraction, but exploits HTML document structure, rather than reasoning compositionally. 2100 but also relation types other than IsA. 3 Modifiers as Functions Formalization. In formal semantics, modification is modeled as function application. Specifically, let MH be a class label consisting of a head H, which we assume to be a common noun, preceded by a modifier M. We use J·K to represent the “interpretation function” that maps a linguistic expression to its denotation in the world. The interpretation of a common noun is the set of entities2 in the universe U. They are denoted by the noun (Heim and Kratzer, 1998): JHK = {e ∈U | e is a H} (1) The interpretation of a modifier M is a function that maps between sets of entities. That is, modifiers select a subset3 of the input set: JMK(H) = {e ∈H | e satisfies M} (2) This formalization leaves open how one decides whether or not “e satisfies M”. This nontrivial, as the meaning of a modifier can vary depending on the class it is modifying: if e is a “good student”, e is not necessarily a “good person”, making it difficult to model whether “e satisfies good” in general. We therefore reframe the above equation, so that the decision of whether “e satisfies M” is made by calling a binary function φM, parameterized by the class H within which e is being considered: JMK(H) = {e ∈H | φM(H, e)} (3) Conceptually, φM captures the core “meaning” of the modifier M, which is the set of properties that differentiate members of the output class MH from members of the more general input class H. This formal semantics framework has two important consequences. First, the modifier has an intrinsic “meaning”. The properties entailed by the modifier are independent of the particular state of the world. This makes it possible to make inferences about “1950s musician” even if no 2We use “entities” and “instances” interchangeably;“entities” is standard terminology in linguistics. 3As does virtually all previous work in information extraction, we assume that modifiers are subsective, acknowledging the limitations (Kamp and Partee, 1995). 1950s musician have been observed. Second, the modifier is a function that can be applied in a truth-theoretic setting. 
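The sketch below renders Eq. 2 and 3 directly: a modifier denotes a function that maps the instance set of the head class to the subset whose members satisfy φM. The predicate φM is left abstract here (Section 4 describes how it is learned from data), and the toy entities are purely illustrative.

# Schematic rendering of Eq. 2-3: a modifier maps the instance set of the head
# class H to the subset whose members satisfy phi_M. phi_M is kept abstract
# here; Section 4 describes how it is learned.

def interpret_modifier(phi_M):
    """Return [[M]]: a function from (H, instances of H) to instances of MH."""
    def apply(head_label, head_instances):
        return {e for e in head_instances if phi_M(head_label, e)}
    return apply

# Toy, hand-written phi for the modifier "1950s" (purely illustrative).
active_in_1950s = {"Charles Mingus", "George Russell"}

def phi_1950s(head, e):
    return e in active_in_1950s

musicians = {"Charles Mingus", "George Russell", "Esperanza Spalding"}
print(interpret_modifier(phi_1950s)("musicians", musicians))
# {'Charles Mingus', 'George Russell'}, i.e., the class "1950s musicians"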
That is, applying “1950s” to the set of “musicians” returns exactly the set of “1950s musicians”. Computational Approaches. While the notion of modifiers as functions has been incorporated into computational models previously, prior work focuses on either assigning an intrinsic meaning to M or on operationalizing M in a truth-theoretic sense, but not on doing both simultaneously. For example, Young et al. (2014) focuses exclusively on the subset selection aspect of modification. That is, given a set of instances H and a modifier M, their method could return the subset MH. However, their method does not model the meaning of the modifier itself, so that, e.g., if there were no red cars in their model of the world, the phrase “red cars” would have no meaning. In contrast, Baroni and Zamparelli (2010) models the meaning of modifiers explicitly as functions that map between vector-space representations of nouns. However, their model focuses on similarity between class labels–e.g., to say that “important routes” is similar to “major roads”–and it is not obvious how the method could be operationalized in order to identify instances of those classes. A contribution of our work is to model the semantics of M intrinsically, but in a way that permits application in the model theoretic setting. We learn an explicit model of the “meaning” of a modifier M relative to a head H, represented as a distribution over properties that differentiate the members of the class MH from those of the class H. We then use this representation to identify the subset of instances of H, which constitute the subclass MH. 4 Learning Modifier Interpretations 4.1 Setup For each modifier M, we would like to learn the function φM from Eq. 3. Doing so makes it possible, given H and an instance e ∈H, to decide whether e has the properties required to be an instance of MH. In general, there is no systematic way to determine the implied relation between M and H, as modifiers can arguably express any semantic relation, given the right context (Weiskopf, 2101 2007). We therefore model the semantic relation between M and H as a distribution over properties that could potentially define the subclass MH ⊆H. We will refer to this distribution as a “property profile” for M relative to H. We make the assumption that relations between M and H that are discussed more often are more likely to capture the important properties of the subclass MH. This assumption is not perfect (Section 4.4) but has given good results for paraphrasing noun phrases (Nakov and Hearst, 2013; Pa¸sca, 2015). Our method for learning property profiles is based on the unsupervised method proposed by Pa¸sca (2015), which uses query logs as a source of common sense knowledge, and rewrites noun compounds by matching MH (“American musicians”) to queries of the form “H(.∗)M” (“musicians from America”). 4.2 Inputs We assume two inputs: 1) an IsA repository, O, containing ⟨e, C⟩tuples where C is a category and e is an instance of C, and 2) a fact repository, D, containing ⟨s, p, o, w⟩tuples where s and o are noun phrases, p is a predicate, and w is a confidence that p expresses a true relation between s and o. Both O and D are extracted from a sample of around 1 billion Web documents in English. The supplementary material gives additional details. We instantiate O with an IsA repository constructed by applying Hearst patterns to the Web documents. Instances are represented as automatically-disambiguated entity mentions4 which, when possible, are resolved to Wikipedia pages. 
Classes are represented as (non-disambiguated) natural language strings. We instantiate D with a large repository of facts extracted using in-house implementations of ReVerb (Fader et al., 2011) and OLLIE (Mausam et al., 2012). The predicates are extracted as natural language strings. Subjects and objects may be either disambiguated entity references or natural language strings. Every tuple is included in both the forward and the reverse direction. E.g. ⟨jazz, perform at, venue⟩also appears as ⟨venue, ←perform at, jazz⟩, where ←is a spe4“Entity mentions” may be individuals, like “Barack Obama”, but may also be concepts like “jazz”. cial character signifying inverted predicates. These inverted predicates simplify the following definitions. In total, O contains 1.1M tuples and D contains 30M tuples. 4.3 Building Property Profiles Properties. Let I be a function that takes as input a noun phrase MH and returns a property profile for M relative to H. We define a “property” to be a tuple of a subject, predicate and object in which the subject position5 is a wildcard, e.g. ⟨∗, born in, America⟩. Any instance that fills the wildcard slot then “has” the property. We expand adjectival modifiers to encompass nominalized forms using a nominalization dictionary extracted from WordNet (Miller, 1995). If MH is “American musician” and we require a tuple to have the form ⟨H, p, M, w⟩, we will include tuples in which the third element is either “American” or “America”. Relating M to H Directly. We first build property profiles by taking the predicate and object from any tuple in D in which the subject is the head and the object is the modifier: I1(MH) = {⟨⟨p, M⟩, w⟩| ⟨H, p, M, w⟩∈D} (4) Relating M to an Instance of H. We also consider an extension in which, rather than requiring the subject to be the class label H, we require the subject to be an instance of H. I2(MH) = {⟨⟨p, M⟩, w⟩| ⟨e, H⟩∈O ∧⟨e, p, M, w⟩∈D} (5) Modifier Expansion. In practice, when building property profiles, we do not require that the object of the fact tuple match the modifier exactly, as suggested in Eq. 4 and 5. Instead, we follow Pa¸sca (2015) and take advantage of facts involving distributionally similar modifiers. Specifically, rather than looking only at tuples in D in which the object matches M, we consider all tuples, but discount the weight proportionally to the similarity between M and the object of the tuple. 5Inverse predicates capture properties in which the wildcard is conceptually the object of the relation, but occupies the subject slot in the tuple. For example, ⟨venue, ←perform at, jazz⟩captures that a “jazz venue” is a “venue” e such that “jazz performed at e”. 2102 Good Property Profiles Bad Property Profiles rice dish French violinist Led Zeppelin song still life painter child actor risk manager * serve with rice * live in France Led Zeppelin write * * known for still life * have child * take risk * include rice * born in France Led Zeppelin play * * paint still life * expect child * be at risk * consist of rice * speak French Led Zeppelin have * still life be by * * play child * be aware of risk Table 1: Example property profiles learned by observing predicates that relate instances of class H to modifier M (I2). Results are similar when using the class label H directly (I1). We spell out inverted predicates (Section 4.2) so wildcards (*) may appear as subjects or objects. 
Thus, I1 is computed as below: I1(MH) = {⟨⟨p, M⟩, w × sim(M, N)⟩ | ⟨H, p, N, w⟩∈D} (6) where sim(M, N) is the cosine similarity between M and N. I2 is computed analogously. We compute sim using a vector space built from Web documents following Lin and Wu (2009); Pantel et al. (2009). We retain the 100 most similar phrases for each of ∼10M phrases, and consider all other similarities to be 0. 4.4 Analysis of Property Profiles Table 1 provides examples of good and bad property profiles for several MHs. In general, frequent relations between M and H capture relevant properties of MH, but it is not always the case. To illustrate, the most frequently discussed relation between “child” and “actor” is that actors have children, but this property is not indicative of the meaning of “child actor”. Qualitatively, the top-ranked interpretations learned by using the head noun directly (I1, Eq. 4) are very similar to those learned using instances of the head (I2, Eq. 5). However, I2 returns many more properties (10 on average per MH) than I1 (just over 1 on average). Anecdotally, we see that I2 captures more specific relations than does I1. For example, for “jazz musicians”, both methods return “* write jazz” and “* compose jazz”, but I2 additionally returns properties like “* be major creative influence in jazz”. We compare I1 and I2 quantitatively in Section 6. Importantly, we do see that both I1 and I2 are capable of learning head-specific property profiles for a modifier. Table 2 provides examples. 5 Class-Instance Identification Instance finding. After finding properties that relate a modifier to a head, we turn to the task of identifying instances of fine-grained Class Label Property Profile American company * based in America American composer * born in America American novel * written in America jazz album * features jazz jazz composer * writes jazz jazz venue jazz performed at * Table 2: Head-specific property profiles learned by relating instances of H to the modifier M (I2). Results are similar using I1. classes. That is, for a given modifier M, we want to instantiate the function φM from Eq. 3. In practice, rather than being a binary function that decides whether or not e is in class MH, our instantiation, ˆφM, will return a realvalued score expressing the confidence that e is a member of MH. For notational convenience, let D(⟨s, p, o⟩) = w, if ⟨s, p, o, w⟩∈D and 0 otherwise. We define ˆφM as follows: ˆφM(H, e) = X ⟨⟨p,o⟩,ω⟩∈I(MH) ω×D(⟨e, p, o⟩) (7) Applying M to H, then, is as in Eq. 3 except that instead of a discrete set, it returns a scored list of candidate instances: JMK(H) = {⟨e, ˆφM(H, e)⟩| ⟨e, H⟩∈O} (8) Ultimately, we need to identify instances of arbitrary class labels, which may contain multiple modifiers. Given a class label C = M1 . . . MkH that contains a head H preceded by modifiers M1 . . . Mk, we generate a list of candidate instances by finding all instances of H that have some property to support every modifier: k\ i=1 {⟨e, s(e)⟩| ⟨e, w⟩∈JMiK(H) ∧w > 0} (9) 2103 where s(e) is the mean6 of the scores assigned by each separate ˆφMi. From here on, we use Mods to refer to our method that generates lists of instances for a class using Eq. 8 and 9. When ˆφM (Eq. 7) is implemented using I1, we use the name ModsH (for “heads”). When it is implemented using I2, we use the name ModsI (for “instances”). Weakly Supervised Reranking. Eq. 
8 uses a naive ranking in which the weight for e ∈MH is the product of how often e has been observed with some property and the weight of that property for the class MH. Thus, instances of H with overall higher counts in D receive high weights for every MH. We therefore train a simple logistic regression model to predict the likelihood that e belongs to MH. We use a small set of features7, including the raw weight as computed in Eq. 7. For training, we sample ⟨e, C⟩pairs from our IsA repository O as positive examples and random pairs that were not extracted by any Hearst pattern as negative examples. We frame the task as a binary prediction of whether e ∈C, and use the model’s confidence as the value of ˆφM in place of the function in Eq. 7. 6 Evaluation 6.1 Experimental Setup Evaluation Sets. We evaluate our models on their ability to return correct instances for arbitrary class labels. As a source of evaluation data, we use Wikipedia category pages (e.g., http://en.wikipedia.org/wiki/Category: Pakistani film actresses). These are pages in which the title is the name of the category (“pakistani film actresses”) and the body is a manually curated list of links to other pages that fall under the category. We measure the precision and recall of each method for discovering the instances listed on these pages given the page title (henceforth “class label”). We collect the titles of all Wikipedia category pages, removing those in which the last word is capitalized or which contain fewer than three words. These heuristics are intended to retain compositional titles in which the head is a single common noun. We also remove 6Also tried minimum, but mean gave better results. 7Feature templates in supplementary material. Evaluation Set: Examples of Class Labels UniformSet: 2008 california wildfires · australian army chaplains · australian boy bands · canadian military nurses · canberra urban places · cellular automaton rules · chinese rice dishes · coldplay concert tours · daniel libeskind designs · economic stimulus programs · german film critics · invasive amphibian species · latin political phrases · log flume rides · malayalam short stories · pakistani film actresses · puerto rican sculptors · string theory books WeightedSet: ancient greek physicists · art deco sculptors · audio engineering schools · ballet training methods · bally pinball machines · british rhythmic gymnasts · calgary flames owners · canadian rock climbers · canon l-series lenses · emi classics artists · free password managers · georgetown university publications · grapefruit league venues · liz claiborne subsidiaries · miss usa 2000 delegates · new zealand illustrators · russian art critics Table 3: Examples of class labels from evaluation sets. any titles that contain links to sub-categories. This is to favor fine-grained classes (“pakistani film actresses”) over coarse-grained ones (“film actresses”). We perform heuristic modifier chunking in order to group together multiword modifiers (e.g., “puerto rican”); for details, see supplementary material. From the resulting list of class labels, we draw two samples of 100 labels each, enforcing that no H appear as the head of more than three class labels per sample. The first sample is chosen uniformly at random (denoted UniformSet). The second (WeightedSet) is weighted so that the probability of drawing M1 . . . MkH is proportional to the total number of class labels in which H appears as the head. 
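The two samples can be drawn as in the following sketch. It assumes, for illustration, that the head of a class label is its last word (the paper's chunking and filtering heuristics are richer), and the way the three-labels-per-head cap is enforced here is one possible reading of the procedure.

# Sketch of drawing UniformSet and WeightedSet. Assumes head = last word of
# the label; the cap of three labels per head is enforced by rejection.
import random
from collections import Counter

def draw_sample(labels, k=100, weighted=False, max_per_head=3, seed=0):
    rng = random.Random(seed)
    head = lambda label: label.split()[-1]
    head_freq = Counter(head(l) for l in labels)
    pool, sample, per_head = list(labels), [], Counter()
    while pool and len(sample) < k:
        if weighted:
            # WeightedSet: probability proportional to how often the head occurs
            choice = rng.choices(pool, weights=[head_freq[head(l)] for l in pool])[0]
        else:
            choice = rng.choice(pool)                  # UniformSet
        pool.remove(choice)
        if per_head[head(choice)] < max_per_head:      # at most three labels per head
            sample.append(choice)
            per_head[head(choice)] += 1
    return sample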
These different evaluation sets8 are intended to evaluate performance on the head versus the tail of class label distribution, since information retrieval methods often perform differently on different parts of the distribution. On average, there are 17 instances per category in UniformSet and 19 in WeightedSet. Table 3 gives examples of class labels. Baselines. We implement two baselines using our IsA repository (O as defined in Section 4.1). Our simplest baseline ignores modifiers altogether, and simply assumes that any instance of H is an instance of MH, regardless of M. In this case the confidence value for 8Available at http://www.seas.upenn.edu/∼nlp/ resources/finegrained-class-eval.gz 2104 ⟨e, MH⟩is equivalent to that for ⟨e, H⟩. We refer to this baseline simply as Baseline. Our second, stronger baseline uses the IsA repository directly to identify instances of the finegrained class C = M1 . . . MkH. That is, we consider e to be an instance of the class if ⟨e, C⟩∈O, meaning the entire class label appeared in a source sentence matching some Hearst pattern. We refer to this baseline as Hearst. The weight used to rank the candidate instances is the confidence value assigned by the Hearst pattern extraction (Section 4.2). Compositional Models. As a baseline compositional model, we augment the Hearst baseline via set intersection. Specifically, for a class C = M1 . . . MkH, if each of the MiH appears in O independently, we take the instances of C to be the intersection of the instances of each of the MiH. We assign the weight of an instance e to be the sum of the weights associated with each independent modifier. We refer to this method as Hearst∩. It is roughly equivalent to (Pa¸sca, 2014). We contrast it with our proposed model, which recognizes instances of a fine-grained class by 1) assigning a meaning to each modifier in the form of a property profile and 2) checking whether a candidate instance exhibits these properties. We refer to the versions of our method as ModsH and ModsI, as described in Section 5. When relevant, we use “raw” to refer to the version in which instances are ranked using raw weights and “RR” to refer to the version in which instances are ranked using logistic regression (Section 5). We also try using the proposed methods to extend rather than replace the Hearst baseline. We combine predictions by merging the ranked lists produced by each system: i.e. the score of an instance is the inverse of the sum of its ranks in each of the input lists. If an instance does not appear at all in an input list, its rank in that list is set to a large constant value. We refer to these combination systems as Hearst+ModsH and Hearst+ModsI. 6.2 Results Precision and Coverage. We first compare the methods in terms of their coverage, the number of class labels for which the method is able to find some instance, and their precision, to what extent the method is able to correctly rank true instances of the class above non-instances. We report total coverage, the number of labels for which the method returns any instance, and correct coverage, the number of labels for which the method returns a correct instance. For precision, we compute the average precision (AP) for each class label. AP ranges from 0 to 1, where 1 indicates that all positive instances were ranked above all negative instances. We report mean average precision (MAP), which is the mean of the APs across all the class labels. 
MAP is only computed over class labels for which the method returns something, meaning methods are not punished for returning empty lists. Table 4 gives examples of instances returned for several class labels and Table 5 shows the precision and coverage for each of the methods. Figure 2 illustrates how the single mean AP score (as reported in Table 5) can misrepresent the relative precision of different methods. In combination, Table 5 and Figure 2 demonstrate that the proposed methods extract instances about as well as the baseline, whenever the baseline can extract anything at all; i.e. the proposed method does not cause a precision drop on classes covered by the baseline. In addition, there are many classes for which the baseline is not able to extract any instances, but the proposed method is. None of the methods can extract some of the gold instances, such as “Dictator perpetuo” and “Furor Teutonicus” of the gold class “latin political phrases”. Table 5 also reveals that the reranking model (RR) consistently increases MAP for the proposed methods. Therefore, going forward, we only report results using the reranking model (i.e. ModsH and ModsI will refer to ModsH RR and ModsI RR, respectively). Manual Re-Annotation. It possible that true instances of a class are missing from our Wikipedia reference set, and thus that our precision scores underestimate the actual precision of the systems. We therefore manually verify the top 10 predictions of each of the systems for a random sample of 25 class labels. We choose class labels for which Hearst was able to return at least one instance, in order to ensure reliable precision 2105 Flemish still life painters: Clara Peeters · Willem Kalf · Jan Davidsz de Heem · Pieter Claesz · Peter Paul Rubens · Frans Snyders · Jan Brueghel the Elder · Hans Memling · Pieter Bruegel the Elder · Caravaggio · Abraham Brueghel Pakistani cricket captains: Salman Butt · Shahid Afridi · Javed Miandad · Azhar Ali · Greg Chappell · Younis Khan · Wasim Akram · Imran Khan · Mohammad Hafeez · Rameez Raja · Abdul Hafeez Kardar · Waqar Younis · Sarfraz Ahmed Thai buddhist temples: Wat Buddhapadipa · Wat Chayamangkalaram · Wat Mongkolratanaram · Angkor Wat · Preah Vihear Temple · Wat Phra Kaew · Wat Rong Khun · Wat Mahathat Yuwaratrangsarit · Vat Phou · Tiger Temple · Sanctuary of Truth · Wat Chalong · Swayambhunath · Mahabodhi Temple · Tiger Cave Temple · Harmandir Sahib Table 4: Instances extracted for several fine-grained classes from Wikipedia. Lists shown are from ModsI. Instances in italics were also returned by Hearst∩. Strikethrough denotes incorrect. UniformSet WeightedSet Coverage MAP Coverage MAP Baseline 95 / 70 0.01 98 / 74 0.01 Hearst 9 / 9 0.63 8 / 8 0.80 Hearst∩ 13 / 12 0.62 9 / 9 0.80 ModsH raw 56 / 32 0.23 50 / 30 0.16 ModsH RR 56 / 32 0.29 50 / 30 0.25 ModsI raw 62 / 36 0.18 59 / 38 0.20 ModsI RR 62 / 36 0.24 59 / 38 0.23 Table 5: Coverage and precision for populating Wikipedia category pages with instances. “Coverage” is the number of class labels (out of 100) for which at least one instance was returned, followed by the number for which at least one correct instance was returned. “MAP” is mean average precision. MAP does not punish methods for returning empty lists, thus favoring the baseline (see Figure 2). Figure 2: Distribution of AP over 100 class labels in WeightedSet. 
The proposed method (red) and the baseline method (blue) achieve high AP for the same number of classes, but ModsI additionally finds instances for classes for which the baseline returns nothing. estimates. For each of these labels, we manually check the top 10 instances proposed by each method to determine whether each belongs to the class. Table 6 shows the precision scores for each method computed against the original Wikipedia list of instances and against our manually-augmented list of gold instances. The overall ordering of the systems does not change, but the precision scores increase notably after re-annotation. We continue to evaluate against the Wikipedia lists, but acknowledge that reported precision is likely an underestimate of true precision. Wikipedia Gold Hearst 0.56 0.79 Hearst∩ 0.53 0.78 ModsH 0.23 0.39 ModsI 0.24 0.42 Hearst+ModsH 0.43 0.63 Hearst+ModsI 0.43 0.63 Table 6: P@10 before vs. after re-annotation; Wikipedia underestimates true precision. UniformSet WeightedSet AUC Recall AUC Recall Baseline 0.55 0.23 0.53 0.28 Hearst 0.56 0.03 0.52 0.02 Hearst∩ 0.57 0.04 0.53 0.02 ModsH 0.68 0.08 0.60 0.06 ModsI 0.71 0.09 0.65 0.09 Hearst∩+ModsH 0.70 0.09 0.61 0.08 Hearst∩+ModsI 0.73 0.10 0.66 0.10 Table 7: Recall of instances on Wikipedia category pages, measured against the full set of instances from all pages in sample. AUC captures tradeoffbetween true and false positives. 2106 (a) Uniform random sample (UniformSet). (b) Weighted random sample (WeightedSet). Figure 3: ROC curves for selected methods (Hearst in blue, proposed in red). Given a ranked list of instances, ROC curves plot true positives vs. false positives retained by setting various cutoffs. The curve becomes linear once all remaining instances have the same score (e.g., 0), as this makes it impossible to add true positives without also including all remaining false positives. Precision-Recall Analysis. We next look at the precision-recall tradeoffin terms of the area under the curve (AUC) when each method attempts to rank the complete list of candidate instances. We take the union of all of the instances proposed by all of the methods (including the Baseline method which, given a class label M0 . . . MkH, proposes every instance of the head H as a candidate). Then, for each method, we rank this full set of candidates such that any instance returned by the method is given the score the method assigns, and every other instance is scored as 0. Table 7 reports the AUC and recall. Figure 3 plots the full ROC curves. The requirement by Hearst that class labels appear in full in a single sentence results in very low recall, which translates into very low AUC when considering the full set of candidate instances. In comparison, the proposed compositional methods make use of a larger set of sentences, and provide nonzero scores for many more candidates, resulting in a >10 point increase in AUC on both UniformSet and WeightedSet (Table 7). 7 Conclusion We have presented an approach to IsA extraction that takes advantage of the compositionality of natural language. Existing approaches often treat class labels as atomic units that must be observed in full in order to be populated with instances. As a result, current methods are not able to handle the infinite number of classes describable in natural language, most of which never appear in text. Our method reasons about each modifier in the label individually, in terms of the properties that it implies about the instances. 
This approach allows us to harness information that is spread across multiple sentences, significantly increasing the number of fine-grained classes that we are able to populate. Acknowledgments The paper incorporates suggestions on an earlier version from Susanne Riehemann. Ryan Doherty offered support in refining and accessing the fact repository used in the evaluation. References M. Bansal, D. Burkett, G. de Melo, and D. Klein. 2014. Structured learning for taxonomy induction with belief propagation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-14). Baltimore, Maryland, pages 1041–1051. M. Baroni and R. Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP-10). Cambridge, Massachusetts, pages 1183–1193. 2107 C. Bizer, J. Lehmann, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak, and S. Hellmann. 2009. DBpedia - a crystallization point for the Web of data. Journal of Web Semantics 7(3):154–165. E. Choi, T. Kwiatkowski, and L. Zettlemoyer. 2015. Scalable semantic parsing with partial ontologies. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15). Beijing, China, pages 1311–1320. B. Dalvi, W. Cohen, and J. Callan. 2012. Websets: Extracting sets of entities from the Web using unsupervised information extraction. In Proceedings of the 5th ACM Conference on Web Search and Data Mining (WSDM-12). Seattle, Washington, pages 243–252. G. de Melo and G. Weikum. 2010. MENTA: Inducing multilingual taxonomies from Wikipedia. In Proceedings of the 19th International Conference on Information and Knowledge Management (CIKM-10). Toronto, Canada, pages 1099–1108. A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11). Edinburgh, Scotland, pages 1535–1545. M. Fares, S. Oepen, and E. Velldal. 2015. Identifying compounds: On the role of syntax. In International Workshop on Treebanks and Linguistic Theories (TLT-14). Warsaw, Poland, pages 273–283. T. Flati, D. Vannella, T. Pasini, and R. Navigli. 2014. Two is bigger (and better) than one: the Wikipedia Bitaxonomy project. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-14). Baltimore, Maryland, pages 945–955. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics (COLING-92). pages 539–545. I. Heim and A. Kratzer. 1998. Semantics in Generative Grammar, volume 13. Blackwell Oxford. I. Hendrickx, Z. Kozareva, P. Nakov, D. ´O S´eaghdha, S. Szpakowicz, and T. Veale. 2013. SemEval-2013 task 4: Free paraphrases of noun compounds. In Proceedings of Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval-13). pages 138–143. J. Hoffart, F. Suchanek, K. Berberich, and G. Weikum. 2013. YAGO2: a spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence Journal. Special Issue on Artificial Intelligence, Wikipedia and Semi-Structured Resources 194:28–61. H. Kamp and B. Partee. 1995. Prototype theory and compositionality. Cognition 57(2):129–191. N. Kim and P. Nakov. 2011. Large-scale noun compound interpretation using bootstrapping and the Web as a corpus. 
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2110–2120, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1193

Parsing to 1-Endpoint-Crossing, Pagenumber-2 Graphs

Junjie Cao∗, Sheng Huang∗, Weiwei Sun and Xiaojun Wan
Institute of Computer Science and Technology, Peking University
The MOE Key Laboratory of Computational Linguistics, Peking University
{junjie.cao,huangsheng,ws,wanxiaojun}@pku.edu.cn
∗The first two authors contributed equally.

Abstract

We study the Maximum Subgraph problem in deep dependency parsing. We consider two restrictions to deep dependency graphs: (a) 1-endpoint-crossing and (b) pagenumber-2. Our main contribution is an exact algorithm that obtains maximum subgraphs satisfying both restrictions simultaneously in time O(n^5). Moreover, ignoring one linguistically rare structure decreases the complexity to O(n^4). We also extend our quartic-time algorithm into a practical parser with a discriminative disambiguation model and evaluate its performance on four linguistic data sets used in semantic dependency parsing.

1 Introduction

Dependency parsing has long been studied as a central issue in developing syntactic or semantic analysis. Recently, several linguistic projects grounded on deep grammar formalisms, including CCG, LFG, and HPSG, have drawn attention to rich syntactic and semantic dependency annotations that are not limited to trees (Hockenmaier and Steedman, 2007; Sun et al., 2014; Ivanova et al., 2012). Parsing for these deep dependency representations can be viewed as the search for Maximum Subgraphs (Kuhlmann and Jonsson, 2015). This is a natural extension of the Maximum Spanning Tree (MST) perspective (McDonald et al., 2005) for dependency tree parsing.

One main challenge of the Maximum Subgraph perspective is to design tractable algorithms for certain graph classes that have good empirical coverage for linguistic annotations. Unfortunately, no previously defined class simultaneously has high coverage and low-degree polynomial parsing algorithms. For example, maximum noncrossing dependency graphs can be found in time O(n^3), but they cover only 48.23% of sentences in CCGBank (Kuhlmann and Jonsson, 2015).

We study two well-motivated restrictions to deep dependency graphs: (a) 1-endpoint-crossing (1EC hereafter; Pitler et al., 2013) and (b) pagenumber less than or equal to 2 (P2 hereafter; Kuhlmann and Jonsson, 2015). We will show that if the output dependency graphs are restricted to satisfy both restrictions, the Maximum Subgraph problem can be solved using dynamic programming in time O(n^5). Moreover, if we ignore one linguistically rare sub-problem, we can reduce the time complexity to O(n^4). Though this new algorithm is a degenerated one, it has the same empirical coverage for various deep dependency annotations. We evaluate the coverage of our algorithms on four linguistic data sets: CCGBank, DeepBank, Enju HPSGBank and the Prague Dependency TreeBank. They cover 95.68%, 97.67%, 97.28% and 97.53% of the dependency graphs in the four corpora. The relatively satisfactory coverage makes it possible to parse with high accuracy. Based on the quartic-time algorithm, we implement a parser with a discriminative disambiguation model.
Our new parser can be taken as a graph-based parser which is complementary to transition-based (Henderson et al., 2013; Zhang et al., 2016) and factorization-based (Martins and Almeida, 2014; Du et al., 2015a) systems. We evaluate our parser on four data sets: those used in SemEval 2014 Task 8 (Oepen et al., 2014), and the dependency graphs extracted from CCGbank (Hockenmaier and Steedman, 2007). Evaluations indicate that our parser produces very accurate deep dependency analysis. It reaches state-of-the-art results on average produced by a transition-based system of Zhang et al. 2110 (2016) and factorization-based systems (Martins and Almeida, 2014; Du et al., 2015a). The implementation of our parser is available at http://www.icst.pku.edu.cn/ lcwm/grass. 2 Background Dependency parsing is the task of mapping a natural language sentence into a dependency graph. Previous work on dependency parsing mainly focused on tree-shaped representations. Recently, it is shown that data-driven parsing techniques are also applicable to generate more flexible deep dependency graphs (Du et al., 2014; Martins and Almeida, 2014; Du et al., 2015b,a; Zhang et al., 2016; Sun et al., 2017). Parsing for deep dependency representations can be viewed as the search for Maximum Subgraphs for a certain graph class G (Kuhlmann and Jonsson, 2015), a generalization of the MST perspective for tree parsing. In particular, we have the following optimization problem: Given an arc-weighted graph G = (V, A), find a subgraph G′ = (V, A′ ⊆A) with maximum total weight such that G′ belongs to G. The choice of G determines the computational complexity of dependency parsing. For example, if G is the set of projective trees, the problem can be solved in time O(|V |3), and if G is the set of noncrossing dependency graphs, the complexity is O(|V |3). Unfortunately, no previously defined class simultaneously has high coverage on deep dependency annotations and low-degree polynomial decoding algorithms for practical parsing. In this paper, we study well-motivated restrictions: 1EC (Pitler et al., 2013) and P2 (Kuhlmann and Jonsson, 2015). We will show that relatively satisfactory coverage and parsing complexity can be obtained for graphs that satisfy both restrictions. 3 The 1EC, P2 Graphs 3.1 The 1EC Restriction Pitler et al. (2013) introduced a very nice property for modelling non-projective dependency trees, i.e. 1EC. This property not only covers a large amount of tree annotations in natural language treebanks, but also allows the corresponding MST problem to bo solved in time of O(n4). The formal description of the 1EC property is adopted from (Pitler et al., 2013). Definition 1. Edges e1 and e2 cross if e1 and e2 have distinct endpoints and exactly one of the endpoints of e1 lies between the endpoints of e2. Definition 2. A dependency graph is 1-EndpointCrossing if for any edge e, all edges that cross e share an endpoint p. Given a sentence s = w0w1 · · · wn−1 of length n, the vertices, i.e. words, are indexed with integers, an arc from wi to wj as a(i,j), and the common endpoint, namely pencil point, of all edges crossed with a(i,j) or a(j,i) as pt(i, j). We denote an edge as e(i,j), if we do not consider its direction. 3.2 The P2 Restriction The term pagenumber is referred to as planar by some other authors, e.g. (Titov et al., 2009; G´omez-Rodr´ıguez and Nivre, 2010; Pitler et al., 2013). We give the definition of related concepts as follows. Definition 3. 
A book is a particular kind of topological space that consists of a single line called the spine, together with a collection of one or more half-planes, called the pages, each having the spine as its boundary.

Definition 4. A book embedding of a finite graph G onto a book B satisfies three conditions: (1) every vertex of G is drawn as a point on the spine of B; (2) every edge of G is drawn as a curve that lies within a single page of B; (3) no page of B contains any edge crossings.

Empirically, a deep dependency graph is not very dense and can typically be embedded onto a very thin book. To measure the thickness of a graph, we can use its pagenumber.

Definition 5. The book pagenumber of G is the minimum number of pages required for a book embedding of G.

For the sake of concision, we say a graph is "pagenumber-k", meaning that its pagenumber is at most k.

Theorem 1. The pagenumber of a 1EC graph may be greater than 2.

Proof. The graph in Figure 1 is an instance which is 1EC but whose pagenumber is 3. There is a cycle, namely a →c →e →b →d →a, consisting of an odd number of edges. Each of these edges crosses exactly the two cycle edges with which it shares no endpoint, so the crossing (conflict) graph is itself an odd cycle; it cannot be 2-coloured, and hence no two-page embedding exists.

Figure 1: A 1EC graph whose pagenumber is 3.

Pitler et al. (2013) proved that 1EC trees are a subclass of graphs whose pagenumber is at most 2. This property provides the foundation for the success in designing dynamic programming algorithms for trees. Theorem 1 indicates that when we consider more general graphs, the situation is more complicated. In this paper, we study graphs that are constrained to be both 1EC and P2. We call them 1EC/P2 graphs.

3.3 Coverage on Linguistic Data

To show that the two restrictions above are well-motivated for describing linguistic data, we evaluate their empirical coverage on four deep dependency corpora (as defined in Section 5.2). These corpora are also used for training and evaluating our data-driven parsers. The coverage is evaluated using sentences in the training sets. Table 1 shows the results.

PN≤2 | 1EC  | EnjuBank       | DeepBank       | PCEDT          | CCGBank
Yes  | Both | 32236 (99.53%) | 32287 (99.69%) | 31866 (98.39%) | 38848 (98.09%)
Both | Yes  | 31507 (97.28%) | 31634 (97.67%) | 31589 (97.53%) | 37913 (95.73%)
Yes  | Yes  | 31507 (97.28%) | 31634 (97.67%) | 31589 (97.53%) | 37894 (95.68%)
No   | Yes  | 0 (0.0%)       | 0 (0.0%)       | 0 (0.0%)       | 19 (0.05%)
Yes  | No   | 729 (2.25%)    | 653 (2.02%)    | 277 (0.86%)    | 954 (2.41%)
Sentences   | 32389          | 32389          | 32389          | 39604

Table 1: Coverage in terms of complete graphs under various structural restrictions. Column "PN≤2" indicates whether the restriction "P2" is satisfied; column "1EC" indicates whether the restriction "1EC" is satisfied.

We can see that 1EC is also an empirically well-motivated restriction when it comes to deep dependency structures. The P2 property has even better coverage. Unfortunately, it is an NP-hard problem to find optimal P2 graphs (Kuhlmann and Jonsson, 2015). Though theoretically a 1EC graph is not necessarily P2, the empirical evaluation demonstrates the high overlap between them on linguistic annotations. In particular, almost all 1EC deep dependency graphs are P2. The percentages of graphs satisfying both restrictions vary between 95.68% for CCGBank and 97.67% for DeepBank. The relatively satisfactory coverage enables accurate practical parsing.

4 The Algorithm

This section contains the main contribution of this paper: a polynomial-time exact algorithm for solving the Maximum Subgraph problem for the class of 1EC/P2 graphs.

Theorem 2. Taking 1EC/P2 graphs as target subgraphs, the Maximum Subgraph problem can be solved in time O(|V|^5).
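Before moving to the algorithm behind Theorem 2, the two structural restrictions can be made concrete in code. The following Python sketch is ours rather than part of the paper: it checks the 1EC property (Definitions 1–2) and two-page embeddability for a fixed spine order (Definitions 4–5, via 2-colourability of the crossing graph) on the example of Figure 1. Function names are illustrative, and the spine order is assumed to be the given vertex indexing.

```python
def crosses(e1, e2):
    # Definition 1: edges cross iff all four endpoints are distinct and
    # exactly one endpoint of e1 lies strictly between the endpoints of e2.
    (a, b), (c, d) = sorted(e1), sorted(e2)
    if len({a, b, c, d}) < 4:
        return False
    return (a < c < b) != (a < d < b)

def is_1ec(edges):
    # Definition 2: for every edge e, all edges crossing e share an endpoint.
    for e in edges:
        crossing = [f for f in edges if crosses(e, f)]
        if crossing:
            shared = set(crossing[0])
            for f in crossing[1:]:
                shared &= set(f)
            if not shared:
                return False
    return True

def two_page_embeddable(edges):
    # With the spine order fixed to the vertex order, a graph fits in two
    # pages iff its crossing (conflict) graph is 2-colourable (bipartite).
    colour = {e: None for e in edges}
    for start in edges:
        if colour[start] is not None:
            continue
        colour[start] = 0
        stack = [start]
        while stack:
            e = stack.pop()
            for f in edges:
                if crosses(e, f):
                    if colour[f] is None:
                        colour[f] = 1 - colour[e]
                        stack.append(f)
                    elif colour[f] == colour[e]:
                        return False
    return True

# Figure 1: vertices a..e indexed 0..4, edges of the cycle a-c-e-b-d-a.
edges = [(0, 2), (2, 4), (1, 4), (1, 3), (0, 3)]
print(is_1ec(edges))               # True
print(two_page_embeddable(edges))  # False: the pagenumber is at least 3
```

Running the sketch on the Figure 1 graph reproduces the statement of Theorem 1: the graph is 1EC, yet its conflict graph is an odd cycle, so two pages do not suffice.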
For sake of formal concision, we introduce the algorithm of which the goal is to calculate the maximum score of a subgraph. Extracting corresponding optimal graphs can be done in a number of ways. For example, we can maintain an auxiliary arc table which is populated parallel to the procedure of obtaining maximum scores. Our algorithm is highly related to the following property: Every subgraph of a 1EC/P2 graph is also a 1EC/P2 graph. We therefore focus on maximal 1EC/P2 graphs, a particular type of 1EC/P2 graphs defined as follows. Definition 6. A maximal 1EC/P2 graph is a 1EC/P2 graph that cannot be extended by including one more edge. Our algorithm is a bottom-up dynamic programming algorithm. It defines different structures corresponding to different sub-problems, and visits all structures from bottom to top, finding the best combination of smaller structures to form a new structure. The key design is to make sure that it can produce all maximal 1EC/P2 graphs. During the search for maximal 1EC/P2 graphs, we can freely delete bad edges whose scores are negative. In particular, we figure out some edges, in each construction step, which can be created without violating either 1EC or P2 restriction. Assume the arc weight associated with a(i,j) is w[i, j]. Then we define a function SELECT(i, j) according to the comparison of 0 and w[i, j] as well as w[j, i]. If w[i, j] ≥0 (or w[j, i] ≥0), we then select a(i,j) (or a(j,i)) and add it to currently the best solution of a sub-problem. SELECT(i, j) returns max(max(0, w[i, j]) + max(0, w[j, i])). If we allow at most one arc between two nodes, SELECT(i, j) returns max(0, w[i, j], w[j, i]). 2112 . Int[i, j] . i . j . L[i, j, x] . x . i . j . R[i, j, x] . x . i . j . LR[i, j, x] . x . i . j . N[i, j, x] . x . i . j . C[x, i, a, b](b < a) .x. i . b . a . C[x, i, a, b](a < b) . x . i . a . b Figure 2: Graphic representations of sub-problems. The graphical illustration of our algorithm uses undirected graphs1. In other words, we use e(i,j) to include the discussion about both a(i,j) and a(j,i). 4.1 Sub-problems We consider six sub-problems when we construct a maximum dependency graph on a given (closed) interval [i, k] ⊆V of vertices. When we focus on the nodes strictly inside this interval, and we use an open interval (i, k) to exclude i and j. See Figure 2 for graphical visualization. The first five are adapted in concord with Pitler et al. (2013)’s solution for trees, and we introduce a new sub-problem, namely C. Because graphs allow for loops as well as disconnectedness, the subproblems are simplified to some extent, while a special case of LR is now prominent. C is thus introduced to represent the special case. The subproblems are explained as follows. Int Int[i, j] represents a partial analysis associated with an interval from i to j inclusively. Int[i, j] may or may not contain edge e(i,j). To parse a given sentence is equivalent to solve the problem Int[0, n −1]. L L[i, j, x] represents a partial analysis associated with an interval from i to j inclusively as well as an external vertex x. ∀p ∈ (i, j), pt(x, p) = i. L[i, j, x] can contain e(i,j) but disallows e(x,i) or e(x,j). R R[i, j, x] represents a partial analysis associated with an interval from i to j inclusively as well as an external vertex x. ∀p ∈ (i, j), pt(x, p) = j. R[i, j, x] can contain e(i,j) but disallows e(x,i) or e(x,j). 1 The single-head property does not hold. We currently do not consider other constraints of directions. 
So prediction of the direction of one edge does not affect prediction of other edges as well as their directions. The directions can be assigned locally, and our parser builds directed rather than undirected graphs in this way. Undirected graphs are only used to conveniently illustrate our algorithms. All experimental results in Section 5.2 consider directed dependencies in a standard way. We use the official evaluation tool provided by SDP2014 shared task. The numberic results reported in this paper are directly comparable to results in other papers. LR LR[i, j, x] represents a partial analysis associated with an interval from i to j inclusively as well as an external vertex x. ∀p ∈ (i, j), pt(x, p) = i or j. LR[i, j, x] must allow e(i,j) but disallows e(x,i) or e(x,j). N N[i, j, x] represents a partial analysis associated with an interval from i to j inclusively and an external vertex x. ∀p ∈ (i, j), pt(x, p) /∈[i, j]. N[i, j, x] can contain e(i,j) but disallows e(x,i) or e(x,j). C C[x, i, a, b](a ̸= b, a > i, b > i) represents a partial analysis associated with an interval from i to max{a, b} inclusively and an external vertex x. Intuitively, C depicts a class of graphs constructed by upper- and lowerplane edges arranged in a staggered pattern. a stands for the last endpoint in the upper plane, and b the last endpoint in the lower plane. We give a definition of C. There exists in C[x, i, a, b] a series {s1, · · · , sm} that fulfills the following constraints: 1. s1 = i < s2 < ... < sm = max{a, b}. 2. ∃e(x,s2). 3. ∀k ∈[1, m −2], ∃e(sk,sk+2). 4. ∀k ∈[1, m −2], ∄e(l,r)(sk, sk+2) ⊂(l, r) ⊂ (s1, sm)2. 5. ∀k ∈[2, m −3], e(sk,sk+2) crosses only with e(sk−1,sk+1) and e(sk+1,sk+3); e(s1,s3) crosses only with e(s2,s4) and e(x,s2); e(sm−2,sm) crosses only with e(sm−3,sm−1). 6. e(x,sm−1), e(s1,sm), e(x,s1), e(x,sm) are disallowed. 7. While a < b, the series can be written as {s1 = i, · · · , sm−1 = a, sm = b}(m ≥5). While b < a, the series is {s1, · · · , sm−1 = 2By “(x, y) ⊂(z, w),” we mean x ≥z, y < w or x > z, y ≤w. 2113 b, sm = a}(m ≥4). We denote the two cases using the signs C1 and C2 respectively. The distinction between C1 and C2 is whether there is one more edge below than above. 4.2 Decomposing an Int Sub-problem Consider an Int[i, j] sub-problem. Assume that k(k ∈(i, j)) is the farthest vertex that is linked with i, and l = pt(i, k). When j −i > 1, there must be such a k given that we consider maximal 1EC/P2 graphs. There are three cases. Case 1: l = j. Vertex k divides the interval [i, j] into two parts: [i, k] and [k, j]. First notice that the edges linking (i, k) and j can only cross with e(i,k). Thus i or k can be the pencil points of those edges, which entails that interval [i, k] is an LR in respect to external vertex j. Because there exist no edge from i to any node in (k, j), interval [k, j] is an Int. The problem is eventually decomposed to: LR[i, k, j] + Int[k, j] + SELECT[i, j]. Case 2: l ∈(k, j). In this case, we can freely add e(i,l) without violating either 1EC or P2 conditions. Therefore Case 2 does not lead to any maximal 1EC/P2 graph. Our algorithm does not need to explicitly handle this case, given that they can be derived from solutions to other cases. Case 3: l ∈(i, k). Now assume that there is an edge from i to a vertex in (l, k). Consider the farthest vertex that is linked with l, say p(p ∈(k, j). We can freely add e(i,p) without violating the 1EC and P2 restrictions. Similar to Case 2, we do not explicitly deal with this case. 
If there is no edge from i to any vertex in (l, k), then [i, l], [l, k], [k, j] are R, Int, L respectively. Three external edges are e(i,k), e(l,j), and e(i,j). The decomposition is: R[i, l, k]+Int[l, k]+ L[k, j, l] + SELECT[l, j] + SELECT[i, j]. 4.3 Decomposing an L Sub-problem If there is no edge from x to any node in (i, j), the graph is reduced to Int[i, j]. If there is one, let k be the vertex farthest from i and adjacent to x. There are two different cases, as shown in Figure 4. 1. If there exists an edge from x to some node in (i, k), intervals [i, k], [k, j] are classified as L, N respectively. Two edges external to the interval: e(x,k), e(i,j). The decomposition is L[i, k, x]+N[k, j, i]+SELECT[x, k]+ SELECT[i, j]. Case 1: l = j . i . k . j . = . + Case 2: l ∈(k, j) . i . k . l . j Case 3: l ∈(i, k) . Does such a dashed edge exist? . i . l . k . j . (3.1) . i . l . k . j . (3.2) . = . + . + Figure 3: Decomposition for Int[i, j], with pt(i, k) = l. . Does such a dashed edge exist? .x. i . k . j . (2.1) . = . + . (2.2) . = . + Figure 4: Decomposition for L[i, j, x]. 2. Otherwise, Intervals [i, k], [k, j] are classified as Int, L respectively. Two edges external to the interval: e(x,k), e(i,j). The decomposition is Int[i, k] + L[k, j, i] + SELECT[x, k] + SELECT[i, j]. 4.4 Decomposing an R Sub-problem If there is no edge from x to (i, j), then the graph is reduced to Int[i, j]. If there is one, let k be the farthest vertex from j and adjacent to x. There are two different cases: 1. If there exist an edge from x to (k, j), Intervals [i, k], [k, j] are classified as N, R respectively. Two edges external to the interval: e(x,k), e(i,j). The decomposition 2114 . (2) .x. i . k . j . = . + Figure 5: Decomposition for N[i, j, x]. . (3.1) There is a separating vertex. .x. i . k . j . (3.2) No such separating vertex. .x. i . k . b Figure 6: Decomposition for LR[i, j, x]. is N[i, k, j] + R[k, j, x] + SELECT[x, k] + SELECT[i, j]. 2. Otherwise, Intervals [i, k], [k, j] are classified as R, Int respectively. Two edges external to the interval are e(x,k), e(i,j). The decomposition is R[i, k, j]+Int[k, j]+ SELECT[x, k]+ SELECT[i, j]. The decomposition is similar to L, we thus do not give a graphical representation to save space. 4.5 Decomposing an N Sub-problem If there is no edge from x to (i, j), then the graph is reduced to Int[i, j]. If there is one, let k be the farthest vertex from i and adjacent to x. By definition, N[i, j, x] does not allow for e(x,i) or e(x,j). Thus k ̸= i or j. Intervals [i, k], [k, j] are classified as N, Int respectively. Two edges external to the interval are e(x,k), e(i,j). The decomposition is N[i, k, x] + Int[k, j] + SELECT[x, k] + SELECT[i, j]. 4.6 Decomposing an LR Sub-problem If the pencil point of all edges from x to (i, j) is i, then the model is the same as L[i, j, x]. Similary, if the pencil point is j, then the model is the same as R[i, j, x]. If some of the edges from x to (i, j) share a pencil point i, and the others share j, there are two different cases. 1. If there is a k which satisfies that within [i, j], only e(i,j) crosses over k (i.e., [i, j] can be divided along dashed line k into two), then, k divides [i, j] into [i, k] and [k, j]. Because k is not allowed to be pencil point, the two subintervals must be an L and an R in terms of external x, respectively. In addition, there are two edges, namely e(x,k) and e(i,j) not included by the subintervals. The problem is thus decomposed as L[i, k, x] + R[k, j, x] + SELECT[x, k] + SELECT[i, j]. 2. 
If there is no such k in concord with the condition in (1), it comes a much more difficult case for which we introduce sub-problem C. Here we put forward the conclusion: Lemma 1. Assume that k(k ∈(i, j)) is the vertex that is adjacent to x and farthest from i. The decomposition for the second case is C[x, i, k, j] + SELECT[x, k] + SELECT[i, j]. Proof. The distinction between Case 1 and 2 implies the following property, which is essential, ∀t ∈(i, j), ∃e(pl,pr) such that t ∈(pl, pr) ⊂[i, j]. We can recursively generate a series of length n—{e(slk,srk)}—in LR[i, j, x] as follows. k = 1 Let slk = i, srk = max{p|p ∈(i + 1, j) and ∃e(i,p)}; k > 1 For srk−1, we denote all edges that cover it as e(pl1,pr1), · · · , e(pls,prs). Note that there is at least one such edge. For any two edges in them, viz e(plu,pru) and e(plv,prv), (plu, pru) ⊂ (plv, prv) or (plv, prv) ⊂ (plu, pru). Otherwise, the P2 property no longer holds due to the interaction among e(slk−1,srk−1), e(plu,pru) and e(plv,prv). Assume (plw, prw) is the largest one, then we let slk = plw, srk = prw. When srk = j, recursion ends. We are going to prove that if we delete two edges e(x,srn−1) and e(i,j) from LR[i, j, x], the series {sl1, sl2, sl3, ..., sln−2, sln−1, sln, srn−1, srn} satisfies each and all the conditions of C1. Condition 1. Because e(sln,srn) covers srn−1, Condition 1 holds for k = m−3, m−2. Consider k ≤m −4 = n −2. Assume that sk+1 < sk, then we have e(sk+1,srk+1) is larger than e(sk,srk+1). This is impossible because we select the largest edge in every step. Condition 2. The LR sub-problem we discussed now cannot be reduced to L nor R, so there must be two edges from x that respectively cross edges linked to i and j. We are going to prove that 2115 the two edges must be e(x,s2) and e(x,srn−1). Assume that there is e(x,p), where p ∈(i, j), p ̸= s2 and p ̸= srn−1. If p ∈(i, s2), then e(s1,s3) crosses with e(x,p) and e(s2,s4) simultaneously. 1EC is violated. If p ∈(s2, srn−1), e(x,p) necessarily crosses with some edge e(sk,sk+2). Furthermore, i < sk < sk+2 < j. Thus 1EC is violated. If p ∈(srn−1, j), the situation is similar to p ∈(i, s2). Condition 3. ∀k ∈ [1, n −2], e(slk,srk) and e(slk+1,srk+1) cross, e(slk+1,srk+1) and e(slk+2,srk+2) cross, so srk ≤slk+2. Otherwise the interaction of the three edges results in the violation of P2. If srk < slk+2, e(slk,srk) and e(slk+2,srk+2) share no common endpoint, violating 1EC. Therefore, srk = slk+2 = sk+2, and Condition 3 is satisfied. We also reach proposition that pt(sk, sk+2) = sk+1. Condition 4. This condition is easy to verify because (sk, sk+2) is the largest with respective to srk. Condition 5. Assume, that there is e(pl,pr) which intersects with e(sk,sk+2), and at the same time satisfy the conditions: e(pl,pr) /∈ {e(st,st+2)|t ∈[1, m −2]} ∪{e(x,s2), e(x,srn−1)}. Since pt(sk, sk+2) = sk+1, pl = sk+1 or pr = sk+1. If pl = sk+1, then pl < slk+2 < pr, and in turn k < m −2. In addition, according to Condition 4, (pl, pr) ⊂(sk+1, sk+3). So pr < sk+3. If k = m −3 then e(x,sn−1) crosses with e(pl,pr) and e(i,j) simultaneously. 1EC is violated. If k < m − 3 then e(sk+2,sk+4) cross with e(pl,pr), and pr < sk+3 = pt(e(sk+2,sk+4)). Again 1EC is violated. If pr = sk+1 The symmetry of our proof entails the violation of 1ec. All in all, the assumption does not hold and thus satisfies Condition 5. Condition 6. e(x,s1), e(x,sm) are disallowed due to definition of an LR problem. e(x,sm−1), e(s1,sm) are disallowed due to the decomposition. Condition 7. 
Due to the existence of e(x,s2) and e(x,srn−1), there must be two edges: e(x,p1) and e(x,p2) that cross e(i,s2) and e(srn−1,j) respectively. There must be an odd number of edges in the series {e(slk,srk)}, otherwise P2 is violated as the case shown in Figure 1. In summary, the last condition . (a) C[i, j, a, b](a < b) .x. i . k . a . b . = . + . (b.1) C[i, j, a, b](a > b), n > 2 .x. i . k . b . a . = . + . (b.2) C[i, j, a, b](a > b), n = 2 .x. i . k . b . a . = . + . + Figure 7: Decomposition for C[x, i, a, b]. is satisfied and we have a C1 structure in this LR sub-problem. 4.7 Decomposing a C Sub-problem We illustrate the decomposition using the graphical representations shown in Figure 7. When a < b, since a is the upper-plane endpoint farthest to the right, and b is the lower-plane counterpart, in this case a precedes b (i.e., a is to the left of b). Let C[x, i, a, k] be a C in which the lower-plane endpoint k precedes a. Add e(k,b) gives a new C sub-problem with lower-plane endpoint preceded by the upper-plane one. The decomposition is then C[x, i, a, k] + Int[a, b] + SELECT[k, b]. When a > b and n > 2, the lower-plane endpoint b precedes a. In analogy, the case can be obtained by adding e(k,a) to C[x, i, k, b]. The decomposition: C[x, i, k, b] + Int[b, a] + SELECT[k, a]. When n = 2, we reach the most fundamental case. Only 4 vertices are in the series, namely i,k,b,a. Moreover, there are three edges: e(x,k), e(i,b), e(k,a), and the interval [i,a] is divided by k,b into three parts. The decomposition is Int[i, k]+Int[k, b]+Int[b, a]+SELECT[x, k]+ SELECT[i, b] + SELECT[k, a]. 4.8 Discussion 4.8.1 Soundness and Completeness The algorithm is sound and complete with respective to 1EC/P2 graphs. We present our algorithms by detailing the decomposition rules. The completeness is obvious because we can decompose any 1EC/P2 graph from an Int, use our rules to reduce it into smaller sub-problems, and repeat this procedure. The decomposition rules are also construction rules. During constructing graphs by applying these rules, we never violate 1EC nor P2 2116 . i . l . k . j .. Int[i, j] . . . Int[k, j] . . LR[i, k, j] . . L[i, k, j] . . . L[l, k, i] . . Int[l, k] . . Int[i, l] . . Int[i, j] . .. . L[k, j, l] . . Int[k, j] . . . Int[l, k] . . R[i, l, k] . . Int[i, l] Figure 8: A maximal 1EC/P2 graph and its two derivations. For brevity, we elide the edges created in each derivation step. restrictions. So our algorithm is sound. 4.8.2 Greedy Search during Construction There is an important difference between our algorithm and Eisner-style MST algorithms (Eisner, 1996b; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010) for trees as well as Kuhlmann and Jonsson’s Maximum Subgraph algorithm for noncrossing graphs. In each construction step, our algorithm allows multiple arcs to be constructed, but whether or not such arcs are added to the target graph depends on their arc-weights. In each step, we do greedy search and decide if adding an related arc according to local scores. If all arcs are assigned scores that are greater than 0, the output of our algorithm includes the most complicated 1EC/P2 graphs. That means adding one more arc voilates the 1EC or P2 restrictions. For all other aforementioned algorithms, in a single construction step, it is clear whether to add a new arc, and which one. There is no local search. 4.8.3 Spurious Ambiguity To generate the same graph, even a maximal 1EC/P2 graph, we may have different derivations. Figure 8 is an example. 
This is similar to syntactic analysis licensed by Combinatory Categorial Grammar (CCG; Steedman, 1996, 2000). To derive one surface string, there usually exist multiple CCG derivations. A common practice in CCG parsing is to define one particular derivation as the standard one, namely the normal form (Eisner, 1996a). The spurious ambiguity in our algorithm does not affect the correctness of first-order parsing, because scores are assigned to individual dependencies rather than to derivation processes. There is no need to distinguish one special derivation here.

4.8.4 Complexity

The sub-problem Int has O(n^2) elements, each of which takes calculation time of order O(n^2). The sub-problems L, R, LR, and N each have O(n^3) elements, with a unit calculation time of O(n). C has O(n^4) elements, with a unit calculation time of O(n). Therefore, the full version of the algorithm runs in time O(n^5) with a space requirement of O(n^4).

4.9 A Degenerated Version

We find that the graphical structure involved in the C sub-problem, namely the coupled staggered pattern, is extremely rare in linguistic analysis. If we ignore this special case, we obtain a degenerated version of the dynamic programming algorithm. This algorithm can find a strict subset of 1EC/P2 graphs. We can thus improve efficiency without sacrificing expressiveness in terms of linguistic data. This degenerated version requires O(n^4) time and O(n^3) space.

        | DeepBank          | EnjuBank          | CCGBank           | PCEDT
        | UP    UR    UF    | UP    UR    UF    | UP    UR    UF    | UP    UR    UF
P1      | 90.75 86.13 88.38 | 93.38 90.20 91.76 | 94.21 88.55 91.29 | 90.61 85.69 88.08
1ECP2d  | 91.05 87.22 89.09 | 93.41 91.83 92.61 | 94.41 91.41 92.89 | 90.76 86.31 88.48

Table 2: Parsing accuracy evaluated on the development sets.

        | DeepBank          | EnjuBank          | CCGBank           | PCEDT
        | UP    UR    UF    | UP    UR    UF    | UP    UR    UF    | UP    UR    UF
Ours    | 90.91 86.98 88.90 | 93.83 91.49 92.64 | 94.23 91.13 92.66 | 90.09 85.90 87.95
ZDSW    | 89.04 88.85 88.95 | 92.92 92.83 92.87 | 92.49 92.30 92.40 | -     -     -
MA      | 90.14 88.65 89.39 | 93.18 91.12 92.14 | -     -     -     | 90.21 85.51 87.80
DSW     | -     -     -     | -     -     -     | 93.03 92.03 92.53 | -     -     -

Table 3: Parsing accuracy evaluated on the test sets.

5 Practical Parsing

5.1 Disambiguation

We extend our quartic-time parsing algorithm into a practical parser. In the context of data-driven parsing, this requires an extra disambiguation model. As with many other parsers, we employ a global linear model. Following Zhang et al. (2016)'s experience, we define rich features extracted from words, POS-tags and pseudo trees. For details we refer to the source code. To estimate parameters, we utilize the averaged perceptron algorithm (Collins, 2002).

5.2 Data

We conduct experiments on unlabeled parsing using four corpora: CCGBank (Hockenmaier and Steedman, 2007), DeepBank (Flickinger et al., 2012), Enju HPSGBank (EnjuBank; Miyao et al., 2004) and the Prague Dependency TreeBank (PCEDT; Hajic et al., 2012). We use "standard" training, validation, and test splits to facilitate comparisons. Following the previous experimental setup for CCG parsing, we use sections 02-21 as training data, section 00 as development data, and section 23 for testing. The other three data sets are from SemEval 2014 Task 8 (Oepen et al., 2014), and the data splitting policy follows the shared task. All four data sets are publicly available from the LDC (Oepen et al., 2016). Experiments for the CCG-grounded analysis were performed using automatically assigned POS-tags generated by a symbol-refined HMM tagger (Huang et al., 2010). Experiments for the other three data sets used the POS-tags provided by the shared task. We also use features extracted from pseudo trees.
We utilize the Mate parser (Bohnet, 2010) to generate pseudo trees. The pre-processing for CCGBank, DeepBank and EnjuBank are exactly the same as in experiments reported in (Zhang et al., 2016). 5.3 Accuracy We evaluate two parsing algorithms, the algorithm for noncrossing dependency graphs (Kuhlmann and Jonsson, 2015), i.e. pagenumber-1 (denoted as P1) graphs, and our quartic-time algorithm (denoted as 1ECP2d). Table 2 summerizes the accuracy obtained our parser. Same feature templates are applied for disambiguation. We can see that our new algorithm yields significant improvements on all data sets, as expected. Especially, due to the improved coverage, the recall is improved more. 5.4 Comparison with Other Parsers Our new parser can be taken as a graph-based parser which employ a different architecture from transition-based and factorization-based (Martins and Almeida, 2014; Du et al., 2015a) systems. We compare our parser with the best reported systems in the other two architectures. ZDSW (Zhang et al., 2016) is transition-based parser while MA (Martins and Almeida, 2014) and DSW (Du et al., 2015a) are two factorization-based systems. All of them achieves state-of-the-art performance. All results on the test set is shown in Table 3. We can see that our parser, as a graph-based parser, is comparable to state-of-the-art transition-based and factorization-based parsers. 6 Conclusion and Future Work In this paper, we explore the strength of the graphbased approach. In particular, we enhance the Maximum Subgraph model with new parsing algorithms for 1EC/P2 graphs. Our work indicates the importance of finding appropriate graph classes that on the one hand are linguistically expressive and on the other hand allow efficient search. Within tree-structured dependency parsing, higher-order factorization that conditions on wider syntactic contexts than arc-factored relationships have been proved very useful. The arcfactored model proposed in this paper may be enhanced with higher-order features too. We leave this for future investigation. Acknowledgments This work was supported by 863 Program of China (2015AA015403), NSFC (61331011), and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the first anonymous reviewer whose valuable comments led to significant revisions. We thank Xingfeng Shi for his help in explicating the idea. Weiwei Sun is the corresponding author. 2118 References Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, Beijing, China, pages 89–97. http://www.aclweb.org/anthology/C10-1011. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In In Proc. EMNLP-CoNLL. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1–8. https://doi.org/10.3115/1118693.1118694. Yantao Du, Weiwei Sun, and Xiaojun Wan. 2015a. A data-driven, factorization parser for CCG dependency structures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 
Association for Computational Linguistics, Beijing, China, pages 1545– 1555. http://www.aclweb.org/anthology/P15-1149. Yantao Du, Fan Zhang, Weiwei Sun, and Xiaojun Wan. 2014. Peking: Profiling syntactic tree parsing techniques for semantic graph parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 459–464. http://www.aclweb.org/anthology/S14-2080. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2015b. Peking: Building semantic dependency graphs with a hybrid parser. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 927–931. http://www.aclweb.org/anthology/S152154. Jason Eisner. 1996a. Efficient normal-form parsing for combinatory categorial grammar. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL). Santa Cruz, pages 79– 86. Jason M. Eisner. 1996b. Three new probabilistic models for dependency parsing: an exploration. In Proceedings of the 16th conference on Computational linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 340–345. Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank: A dynamically annotated treebank of the wall street journal. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories. pages 85–96. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2010. A transition-based parser for 2-planar dependency structures. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1492–1501. http://www.aclweb.org/anthology/P10-1151. Jan Hajic, Eva Hajicov´a, Jarmila Panevov´a, Petr Sgall, Ondej Bojar, Silvie Cinkov´a, Eva Fuc´ıkov´a, Marie Mikulov´a, Petr Pajas, Jan Popelka, Jir´ı Semeck´y, Jana Sindlerov´a, Jan Step´anek, Josef Toman, Zdenka Uresov´a, and Zdenek Zabokrtsk´y. 2012. Announcing prague czech-english dependency treebank 2.0. In Proceedings of the 8th International Conference on Language Resources and Evaluation. Istanbul, Turkey. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics 39(4):949–998. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the penn treebank. Computational Linguistics 33(3):355–396. Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 12–22. http://www.aclweb.org/anthology/D10-1002. Angelina Ivanova, Stephan Oepen, Lilja Øvrelid, and Dan Flickinger. 2012. Who did what to whom? A contrastive study of syntacto-semantic dependencies. In Proceedings of the Sixth Linguistic Annotation Workshop. Jeju, Republic of Korea, pages 2–11. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1–11. http://www.aclweb.org/anthology/P10-1001. Marco Kuhlmann and Peter Jonsson. 2015. 
Parsing to noncrossing dependency graphs. Transactions of the Association for Computational Linguistics 3:559– 570. Andr´e F. T. Martins and Mariana S. C. Almeida. 2014. Priberam: A turbo semantic parser with second order features. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 471–476. http://www.aclweb.org/anthology/S142082. 2119 Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006)). volume 6, pages 81–88. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Vancouver, British Columbia, Canada, pages 523–530. Yusuke Miyao, Takashi Ninomiya, and Jun ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In IJCNLP. pages 684–693. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Zdeˇnka Ureˇsov´a. 2016. Semantic Dependency Parsing (SDP) graph banks release 1.0 LDC2016T10. Web Download. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Association for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 63–72. http://www.aclweb.org/anthology/S14-2008. Emily Pitler, Sampath Kannan, and Mitchell Marcus. 2013. Finding optimal 1-endpoint-crossing trees. TACL 1:13–24. http://www.transacl.org/wpcontent/uploads/2013/03/paper13.pdf. M. Steedman. 1996. Surface Structure and Interpretation. Linguistic Inquiry Monographs. Mit Press. http://books.google.ca/books?id=Mh1vQgAACAAJ. Mark Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA, USA. Weiwei Sun, Junjie Cao, and Xiaojun Wan. 2017. Semantic dependency parsing via book embedding. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Weiwei Sun, Yantao Du, Xin Kou, Shuoyang Ding, and Xiaojun Wan. 2014. Grammatical relations in Chinese: GB-ground extraction and data-driven parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 446– 456. http://www.aclweb.org/anthology/P14-1042. Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In Proceedings of the 21st international jont conference on Artifical intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pages 1562–1567. http://dl.acm.org/citation.cfm?id=1661445.1661696. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics 42(3):353–389. http://aclweb.org/anthology/J163001. 2120
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2121–2130, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1194

Semi-supervised Multitask Learning for Sequence Labeling

Marek Rei
The ALTA Institute, Computer Laboratory, University of Cambridge, United Kingdom
[email protected]

Abstract

We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data.

1 Introduction

Accurate and efficient sequence labeling models have a wide range of applications, including named entity recognition (NER), part-of-speech (POS) tagging, error detection and shallow parsing. Specialised approaches to sequence labeling often include extensive feature engineering, such as integrated gazetteers, capitalisation features, morphological information and POS tags. However, recent work has shown that neural network architectures are able to achieve comparable or improved performance, while automatically discovering useful features for a specific task and only requiring a sequence of tokens as input (Collobert et al., 2011; Irsoy and Cardie, 2014; Lample et al., 2016).

This feature discovery is usually driven by an objective function based on predicting the annotated labels for each word, without much incentive to learn more general language features from the available text. In many sequence labeling tasks, the relevant labels in the dataset are very sparse and most of the words contribute very little to the training process. For example, in the CoNLL 2003 NER dataset (Tjong Kim Sang and De Meulder, 2003) only 17% of the tokens represent an entity. This ratio is even lower for error detection, with only 14% of all tokens being annotated as an error in the FCE dataset (Yannakoudakis et al., 2011). The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from words that have the majority label (O for outside of an entity; C for correct word). Therefore, we propose an additional training objective which allows the models to make more extensive use of the available data.

The task of language modeling offers an easily accessible objective – learning to predict the next word in the sequence requires only plain text as input, without relying on any particular annotation.
Neural language modeling architectures also have many similarities to common sequence labeling frameworks: words are first mapped to distributed embeddings, followed by a recurrent neural network (RNN) module for composing word sequences into an informative context representation (Mikolov et al., 2010; Graves et al., 2013; Chelba et al., 2013). Compared to any sequence labeling dataset, the task of language modeling has a considerably larger and more varied set of possible options to predict, making better use of each available word and encouraging the model to learn more general language features for accurate composition. In this paper, we propose a neural sequence labeling architecture that is also optimised as a language model, predicting surrounding words in the dataset in addition to assigning labels to each token. Specific sections of the network are op2121 timised as a forward- or backward-moving language model, while the label predictions are performed using context from both directions. This secondary unsupervised objective encourages the framework to learn richer features for semantic composition without requiring additional training data. We evaluate the sequence labeling model on 10 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that by including the unsupervised objective into the training process, the sequence labeling model achieves consistent performance improvements on all the benchmarks. This multitask training framework gives the largest improvements on error detection datasets, outperforming the previous state-of-the-art architecture. 2 Neural Sequence Labeling We use the neural network model of Rei et al. (2016) as the baseline architecture for our sequence labeling experiments. The model takes as input one sentence, separated into tokens, and assigns a label to every token using a bidirectional LSTM. The input tokens are first mapped to a sequence of distributed word embeddings [x1, x2, x3, ..., xT ]. Two LSTM (Hochreiter and Schmidhuber, 1997) components, moving in opposite directions through the sentence, are then used for constructing context-dependent representations for every word. Each LSTM takes as input the hidden state from the previous time step, along with the word embedding from the current step, and outputs a new hidden state. The hidden representations from both directions are concatenated, in order to obtain a context-specific representation for each word that is conditioned on the whole sentence in both directions: −→ ht = LSTM(xt, −−→ ht−1) (1) ←− ht = LSTM(xt, ←−− ht+1) (2) ht = [−→ ht; ←− ht] (3) Next, the concatenated representation is passed through a feedforward layer, mapping the components into a joint space and allowing the model to learn features based on both context directions: dt = tanh(Wdht) (4) where Wd is a weight matrix and tanh is used as the non-linear activation function. In order to predict a label for each token, we use either a softmax or CRF output architecture. For softmax, the model directly predicts a normalised distribution over all possible labels for every word, conditioned on the vector dt: P(yt|dt) = softmax(Wodt) = eWo,kdt P ˜k∈K eWo,˜kdt (5) where K is the set of all possible labels, and Wo,k is the k-th row of output weight matrix Wo. 
The model is optimised by minimising categorical crossentropy, which is equivalent to minimising the negative log-probability of the correct labels: E = − T X t=1 log(P(yt|dt)) (6) While this architecture returns predictions based on all words in the input, the labels are still predicted independently. For some tasks, such as named entity recognition with a BIO1 scheme, there are strong dependencies between subsequent labels and it can be beneficial to explicitly model these connections. The output of the architecture can be modified to include a Conditional Random Field (CRF, Lafferty et al. (2001)), which allows the network to look for the most optimal path through all possible label sequences (Huang et al., 2015; Lample et al., 2016). The model is then optimised by maximising the score for the correct label sequence, while minimising the scores for all other sequences: E = −s(y) + log X ˜y∈eY es(˜y) (7) where s(y) is the score for a given sequence y and Y is the set of all possible label sequences. We also make use of the character-level component described by Rei et al. (2016), which builds an alternative representation for each word. The individual characters of a word are mapped to character embeddings and passed through a bidirectional LSTM. The last hidden states from both direction are concatenated and passed through a 1Each NER entity has sub-tags for Beginning, Inside and Outside 2122 h2 x2 d2 o2 h2 proposes m2 q2 m2 q2 O Fischler measures h3 x3 d3 o3 h3 measures m3 q3 m3 q3 O proposes </s> h1 x1 d1 o1 h1 Fischler m1 q1 m1 q1 PER <s> proposes Figure 1: The unfolded network structure for a sequence labeling model with an additional language modeling objective, performing NER on the sentence ”Fischler proposes measures”. The input tokens are shown at the bottom, the expected output labels are at the top. Arrows above variables indicate the directionality of the component (forward or backward). nonlinear layer. The resulting vector representation is combined with a regular word embedding using a dynamic weighting mechanism that adaptively controls the balance between word-level and character-level features. This framework allows the model to learn character-based patterns and handle previously unseen words, while still taking full advantage of the word embeddings. 3 Language Modeling Objective The sequence labeling model in Section 2 is only optimised based on the correct labels. While each token in the input does have a desired label, many of these contribute very little to the training process. For example, in the CoNLL 2003 NER dataset (Tjong Kim Sang and De Meulder, 2003) there are only 8 possible labels and 83% of the tokens have the label O, indicating that no named entity is detected. This ratio is even higher for error detection, with 86% of all tokens containing no errors in the FCE dataset (Yannakoudakis et al., 2011). The sequence labeling models are able to learn this bias in the label distribution without obtaining much additional information from the majority labels. Therefore, we propose a supplementary objective which would allow the models to make full use of the training data. In addition to learning to predict labels for each word, we propose optimising specific sections of the architecture as language models. The task of predicting the next word will require the model to learn more general patterns of semantic and syntactic composition, which can then be reused in order to predict individual labels more accurately. 
This objective is also generalisable to any sequence labeling task and dataset, as it requires no additional annotated training data. A straightforward modification of the sequence labeling model would add a second parallel output layer for each token, optimising it to predict the next word. However, the model has access to the full context on each side of the target token, and predicting information that is already explicit in the input would not incentivise the model to learn about composition and semantics. Therefore, we must design the loss objective so that only sections of the model that have not yet observed the next word are optimised to perform the prediction. We solve this by predicting the next word in the sequence only based on the hidden representation −→ ht from the forward-moving LSTM. Similarly, the previous word in the sequence is predicted based on ←− ht from the backward-moving LSTM. This architecture avoids the problem of giving the correct answer as an input to the language modeling component, while the full framework is still optimised to predict labels based on the whole sentence. First, the hidden representations from forwardand backward-LSTMs are mapped to a new space using a non-linear layer: −→ mt = tanh(−→ W m −→ ht) (8) ←− mt = tanh(←− W m ←− ht) (9) where −→ W m and ←− W m are weight matrices. This separate transformation learns to extract features that are specific to language modeling, while the LSTM is optimised for both objectives. We also use the opportunity to map the representation to a smaller size – since language modeling is not the 2123 main goal, we restrict the number of parameters available for this component, forcing the model to generalise more using fewer resources. These representations are then passed through softmax layers in order to predict the preceding and following word: P(wt+1|−→ mt) = softmax(−→ W q−→ mt) (10) P(wt−1|←− mt) = softmax(←− W q←− mt) (11) The objective function for both components is then constructed as a regular language modeling objective, by calculating the negative loglikelihood of the next word in the sequence: −→ E = − T−1 X t=1 log(P(wt+1|−→ mt)) (12) ←− E = − T X t=2 log(P(wt−1|←− mt)) (13) Finally, these additional objectives are combined with the training objective E from either Equation 6 or 7, resulting in a new cost function eE for the sequence labeling model: eE = E + γ(−→ E + ←− E ) (14) where γ is a parameter that is used to control the importance of the language modeling objective in comparison to the sequence labeling objective. Figure 1 shows a diagram of the unfolded neural architecture, when performing NER on a short sentence with 3 words. At each token position, the network is optimised to predict the previous word, the current label, and the next word in the sequence. The added language modeling objective encourages the system to learn richer feature representations that are then reused for sequence labeling. For example, −→ h1 is optimised to predict proposes as the next word, indicating that the current word is a subject, possibly a named entity. In addition, ←− h2 is optimised to predict Fischler as the previous word and these features are used as input to predict the PER tag at o1. The proposed architecture introduces 4 additional parameter matrices that are optimised during training: −→ W m, ←− W m, −→ W q, and ←− W q. 
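To make the combined objective concrete, the following is a minimal PyTorch-style sketch written for this presentation; it is not the authors' Theano implementation, it omits the character-level component, the CRF output option and the restricted language modeling vocabulary, and all class, layer and variable names are ours.

```python
import torch
import torch.nn as nn

class SecondaryLMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, lm_dim, n_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.feed = nn.Linear(2 * hidden_dim, hidden_dim)   # cf. Eq. 4
        self.out = nn.Linear(hidden_dim, n_labels)          # cf. Eq. 5
        self.fwd_m = nn.Linear(hidden_dim, lm_dim)          # cf. Eq. 8
        self.bwd_m = nn.Linear(hidden_dim, lm_dim)          # cf. Eq. 9
        self.fwd_q = nn.Linear(lm_dim, vocab_size)          # cf. Eq. 10
        self.bwd_q = nn.Linear(lm_dim, vocab_size)          # cf. Eq. 11
        self.ce = nn.CrossEntropyLoss()

    def forward(self, words, labels, gamma=0.1):
        x = self.embed(words)                  # (B, T, emb_dim)
        h, _ = self.bilstm(x)                  # (B, T, 2 * hidden_dim)
        h_fwd, h_bwd = h.chunk(2, dim=-1)      # forward / backward halves

        # Sequence labeling loss with a softmax output (cf. Eq. 5-6).
        d = torch.tanh(self.feed(h))
        logits = self.out(d)
        tag_loss = self.ce(logits.reshape(-1, logits.size(-1)),
                           labels.reshape(-1))

        # Forward LM: predict w_{t+1} from the forward state at t (cf. Eq. 12).
        fwd_logits = self.fwd_q(torch.tanh(self.fwd_m(h_fwd[:, :-1])))
        fwd_loss = self.ce(fwd_logits.reshape(-1, fwd_logits.size(-1)),
                           words[:, 1:].reshape(-1))

        # Backward LM: predict w_{t-1} from the backward state at t (cf. Eq. 13).
        bwd_logits = self.bwd_q(torch.tanh(self.bwd_m(h_bwd[:, 1:])))
        bwd_loss = self.ce(bwd_logits.reshape(-1, bwd_logits.size(-1)),
                           words[:, :-1].reshape(-1))

        # Combined objective (cf. Eq. 14).
        return tag_loss + gamma * (fwd_loss + bwd_loss)

# Toy usage: batch of 2 sentences of length 5, vocabulary 1000, 8 labels.
model = SecondaryLMTagger(1000, 50, 100, 50, 8)
words = torch.randint(0, 1000, (2, 5))
labels = torch.randint(0, 8, (2, 5))
loss = model(words, labels)
loss.backward()
```

In this sketch only the tagging branch is needed at prediction time; the two language modeling heads are used purely as an auxiliary training signal, mirroring the description above.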
However, the computational complexity and resource requirements of this model during sequence labeling are equal to the baseline from Section 2, since the language modeling components are ignored during testing and these additional weight matrices are not used. While our implementation uses a basic softmax as the output layer for the language modeling components, the efficiency during training could be further improved by integrating noise-contrastive estimation (NCE, Mnih and Teh (2012)) or hierarchical softmax (Morin and Bengio, 2005).

4 Evaluation Setup

The proposed architecture was evaluated on 10 different sequence labeling datasets, covering the tasks of error detection, NER, chunking, and POS-tagging. The word embeddings in the model were initialised with publicly available pretrained vectors, created using word2vec (Mikolov et al., 2013). For general-domain datasets we used 300-dimensional embeddings trained on Google News.2 For biomedical datasets, the word embeddings were initialised with 200-dimensional vectors trained on PubMed and PMC.3 The neural network framework was implemented using Theano (Al-Rfou et al., 2016) and we make the code publicly available online.4

2 https://code.google.com/archive/p/word2vec/
3 http://bio.nlplab.org/
4 https://github.com/marekrei/sequence-labeler

For most of the hyperparameters, we follow the settings by Rei et al. (2016) in order to facilitate direct comparison with previous work. The LSTM hidden layers are set to size 200 in each direction for both word- and character-level components. All digits in the text were replaced with the character 0; any words that occurred only once in the training data were replaced by an OOV token. In order to reduce computational complexity in these experiments, the language modeling objective predicted only the 7,500 most frequent words, with an extra token covering all the other words. Sentences were grouped into batches of size 64 and parameters were optimised using AdaDelta (Zeiler, 2012) with default learning rate 1.0. Training was stopped when performance on the development set had not improved for 7 epochs. Performance on the development set was also used to select the best model, which was then evaluated on the test set. In order to avoid any outlier results due to randomness in the model initialisation, each configuration was trained with 10 different random seeds and the averaged results are presented in this paper.

We use previously established splits for training, development and testing on each of these datasets. Based on development experiments, we found that error detection was the only task that did not benefit from having a CRF module at the output layer: since the labels are very sparse and the dataset contains only 2 possible labels, explicitly modeling state transitions does not improve performance on this task. Therefore, we use a softmax output for error detection experiments and CRF on all other datasets. The publicly available sequence labeling system by Rei et al. (2016) is used as the baseline.
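A minimal sketch of the preprocessing just described: digit normalisation, mapping singleton words to an OOV token, and restricting the language modeling vocabulary to the most frequent words plus a catch-all token. All function and token names are assumptions, not taken from the released code:

```python
import re
from collections import Counter

def normalise(tok):
    return re.sub(r"\d", "0", tok)           # all digits become the character 0

def build_vocabs(train_sentences, lm_vocab_size=7500):
    """Build the word vocabulary (singletons become <oov>) and the restricted
    language modeling vocabulary (one extra token covers all other words)."""
    counts = Counter(normalise(t) for sent in train_sentences for t in sent)
    word_vocab = {"<oov>": 0}
    for w, c in counts.items():
        if c > 1:
            word_vocab.setdefault(w, len(word_vocab))
    lm_vocab = {"<lm-other>": 0}
    for w, _ in counts.most_common(lm_vocab_size):
        lm_vocab.setdefault(w, len(lm_vocab))
    return word_vocab, lm_vocab

def encode(sentence, vocab, unk):
    return [vocab.get(normalise(t), vocab[unk]) for t in sentence]

# word_vocab, lm_vocab = build_vocabs(train_sentences)
# ids = encode(["Costs", "rose", "12", "%"], word_vocab, "<oov>")
```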
During development we found that applying dropout (Srivastava et al., 2014) on word embeddings improves performance on nearly all datasets, compared to this baseline. Therefore, element-wise dropout was applied to each of the input embeddings with probability 0.5 during training, and the weights were multiplied by 0.5 during testing. In order to separate the effects of this modification from the newly proposed optimisation method, we report results for three different systems: 1) the publicly available baseline, 2) applying dropout on top of the baseline system, and 3) applying both dropout and the novel multitask objective from Section 3.

Based on development experiments we set the value of γ, which controls the importance of the language modeling objective, to 0.1 for all experiments throughout training. Since context prediction is not part of the main evaluation of sequence labeling systems, we expected the additional objective to mostly benefit early stages of training, whereas the model would later need to specialise only towards assigning labels. Therefore, we also performed experiments on the development data where the value of γ was gradually decreased, but found that a small static value performed comparably well or even better. These experiments indicate that the language modeling objective helps the network learn general-purpose features that are useful for sequence labeling even in the later stages of training.

5 Error Detection

We first evaluate the sequence labeling architectures on the task of error detection: given a sentence written by a language learner, the system needs to detect which tokens have been manually tagged by annotators as being an error. As the first benchmark, we use the publicly released First Certificate in English (FCE, Yannakoudakis et al. (2011)) dataset, containing 33,673 manually annotated sentences. The texts were written by learners during language examinations in response to prompts eliciting free-text answers and assessing mastery of the upper-intermediate proficiency level. In addition, we evaluate on the CoNLL 2014 shared task dataset (Ng et al., 2014), which has been converted to an error detection task. This contains 1,312 sentences, written by higher-proficiency learners on more technical topics. They have been manually corrected by two separate annotators, and we report results on each of these annotations. For both datasets we use the FCE training set for model optimisation and results on the CoNLL-14 dataset indicate out-of-domain performance. Rei and Yannakoudakis (2016) present results on these datasets using the same setup, along with evaluating the top shared task submissions on the task of error detection. As the main evaluation metric, we use the F0.5 measure, which is consistent with previous work and was also adopted by the CoNLL-14 shared task.

Table 1 contains results for the three different sequence labeling architectures on the error detection datasets.

             FCE DEV  |        FCE TEST        |     CoNLL-14 TEST1     |     CoNLL-14 TEST2
             F0.5     |  P      R      F0.5    |  P      R      F0.5    |  P      R      F0.5
Baseline     48.78    |  55.38  25.34  44.56   |  15.65  16.80  15.80   |  25.22  19.25  23.62
+ dropout    48.68    |  54.11  23.33  42.65   |  14.29  17.13  14.71   |  22.79  19.42  21.91
+ LMcost     53.17    |  58.88  28.92  48.48   |  17.68  19.07  17.86   |  27.62  21.18  25.88

Table 1: Precision, Recall and F0.5 score of alternative sequence labeling architectures on error detection datasets. Dropout and LMcost modifications are added incrementally to the baseline.
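For reference, the F0.5 measure used in Table 1 weights precision more heavily than recall. A minimal sketch of the token-level computation follows; the label names used for the positive class are assumptions:

```python
def f_beta(precision, recall, beta=0.5):
    """F_beta; beta=0.5 weights precision twice as much as recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def token_prf(gold, pred, positive="i"):
    """Token-level precision/recall/F0.5 for binary error detection labels,
    where `positive` marks an erroneous token (label name is an assumption)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, f_beta(precision, recall, 0.5)
```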
We found that including the dropout actually decreases performance in the setting of error detection, which is likely due to the relatively small amount of error examples available in the dataset: it is better for the model to memorise them without introducing additional noise in the form of dropout. However, we did verify that dropout indeed gives an improvement in combination with the novel language modeling objective. Because the model is receiving additional information at every token, dropout is no longer obscuring the limited training data but instead helps with generalisation.

The bottom row shows the performance of the language modeling objective when added on top of the baseline model, along with dropout on word embeddings. This architecture outperforms the baseline on all benchmarks, increasing both precision and recall, and giving a 3.9% absolute improvement on the FCE test set. These results also improve over the previous best results by Rei and Yannakoudakis (2016) and Rei et al. (2016), when all systems are trained on the same FCE dataset. While the added components also require more computation time, the difference is not excessive: one training batch over the FCE dataset was processed in 112 seconds on the baseline system and 133 seconds using the language modeling objective.

Error detection is the task where introducing the additional cost objective gave the largest improvement in performance, for a few reasons:
1. This task has very sparse labels in the datasets, with error tokens very infrequent and far apart. Without the language modeling objective, the network has very little use for all the available words that contain no errors.
2. There are only two possible labels, correct and incorrect, which likely makes it more difficult for the model to learn feature detectors for many different error types. Language modeling uses a much larger number of possible labels, giving a more varied training signal.
3. Finally, the task of error detection is directly related to language modeling. By learning a better model of the overall text in the training corpus, the system can more easily detect any irregularities.

We also analysed the performance of the different architectures during training. Figure 2 shows the F0.5 score on the development set for each model after every epoch over the training data. The baseline model peaks quickly, followed by a gradual drop in performance, which is likely due to overfitting on the available data. Dropout provides an effective regularisation method, slowing down the initial performance but preventing the model from overfitting. The added language modeling objective provides a substantial improvement: the system outperforms other configurations already in the early stages of training and the results are also sustained in the later epochs.

[Figure 2: F0.5 score on the FCE development set after each training epoch.]
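The per-epoch curves in Figure 2 come from evaluating each model on the development set after every pass over the training data, combined with the patience-based stopping described in Section 4. A minimal PyTorch-style sketch of that loop, where all callables and names are assumptions rather than the released code:

```python
import copy

def train_with_early_stopping(model, run_epoch, evaluate_dev, patience=7, max_epochs=200):
    """Train for one epoch at a time, score the development set, keep the best
    model, and stop after `patience` epochs without improvement."""
    best_score, best_state, stale = float("-inf"), None, 0
    for epoch in range(max_epochs):
        run_epoch(model)                           # one pass over the training data
        score = evaluate_dev(model)                # e.g. F0.5 on the development set
        if score > best_score:
            best_score = score
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)          # report the best dev model
    return model, best_score
```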
6 NER and Chunking

In the next experiments we evaluate the language modeling objective on named entity recognition and chunking. For general-domain NER, we use the English section of the CoNLL 2003 corpus (Tjong Kim Sang and De Meulder, 2003), containing news stories from the Reuters Corpus. We also report results on two biomedical NER datasets: the BioCreative IV Chemical and Drug corpus (CHEMDNER, Krallinger et al. (2015)) of 10,000 abstracts, annotated for mentions of chemical and drug names, and the JNLPBA corpus (Kim et al., 2004) of 2,404 abstracts annotated for mentions of different cells and proteins. Finally, we use the CoNLL 2000 dataset (Tjong Kim Sang and Buchholz, 2000), created from the Wall Street Journal Sections 15-18 and 20 from the Penn Treebank, for evaluating sequence labeling on the task of chunking. The standard CoNLL entity-level F1 score is used as the main evaluation metric.

Compared to error detection corpora, the labels are more balanced in these datasets. However, majority labels still exist: roughly 83% of the tokens in the NER datasets are tagged as "O", indicating that the word is not an entity, and the NP label covers 53% of tokens in the chunking data.

Table 2 contains results for evaluating the different architectures on NER and chunking.

             CoNLL-00        CoNLL-03        CHEMDNER        JNLPBA
             DEV     TEST    DEV     TEST    DEV     TEST    DEV     TEST
Baseline     92.92   92.67   90.85   85.63   83.63   84.51   77.13   72.79
+ dropout    93.40   93.15   91.14   86.00   84.78   85.67   77.61   73.16
+ LMcost     94.22   93.88   91.48   86.26   85.45   86.27   78.51   73.83

Table 2: Performance of alternative sequence labeling architectures on NER and chunking datasets, measured using the standard CoNLL entity-level F1 score.

On these tasks, the application of dropout provides a consistent improvement: applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective consistently further improves performance on all benchmarks. While these results are comparable to the respective state-of-the-art results on most datasets, we did not fine-tune hyperparameters for any specific task, instead providing a controlled analysis of the language modeling objective in different settings. For JNLPBA, the system achieves 73.83% compared to 72.55% by Zhou and Su (2004) and 72.70% by Rei et al. (2016). On CoNLL-03, Lample et al. (2016) achieve a considerably higher result of 90.94%, possibly due to their use of specialised word embeddings and a custom version of LSTM. However, our system does outperform a similar architecture by Huang et al. (2015), achieving 86.26% compared to 84.26% F1 score on the CoNLL-03 dataset.

Figure 3 shows F1 on the CHEMDNER development set after every training epoch. Without dropout, performance peaks quickly and then trails off as the system overfits on the training set. Using dropout, the best performance is sustained throughout training and even slightly improved. Finally, adding the language modeling objective on top of dropout allows the system to consistently outperform the other architectures.

[Figure 3: Entity-level F1 score on the CHEMDNER development set after each training epoch.]
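The entity-level scores in Table 2 follow the CoNLL convention of matching complete labelled spans. As a reference point, a simplified reimplementation of that evaluation (not the official conlleval script, and with deliberately minimal BIO handling) could look like this:

```python
def bio_spans(labels):
    """Convert a BIO label sequence into a set of (type, start, end) spans."""
    spans, start, etype = set(), None, None
    for i, lab in enumerate(list(labels) + ["O"]):      # sentinel flushes last span
        if lab.startswith("B-") or lab == "O" or (lab.startswith("I-") and etype != lab[2:]):
            if etype is not None:
                spans.add((etype, start, i))
            start, etype = (i, lab[2:]) if lab != "O" else (None, None)
        # a continuing I- tag of the same type simply extends the current span
    return spans

def entity_f1(gold_sequences, pred_sequences):
    """Entity-level precision, recall and F1 over exactly matching spans."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sequences, pred_sequences):
        g, p = bio_spans(gold), bio_spans(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```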
7 POS tagging

We also evaluated the language modeling training objective on four POS-tagging datasets. The Penn Treebank POS-tag corpus (Marcus et al., 1993) contains texts from the Wall Street Journal and has been annotated with 48 different part-of-speech tags. In addition, we use the POS-annotated subset of the GENIA corpus (Ohta et al., 2002) containing 2,000 biomedical PubMed abstracts. Following Tsuruoka et al. (2005), we use the same 210-document test set. Finally, we also evaluate on the Finnish and Spanish sections of the Universal Dependencies v1.2 dataset (UD, Nivre et al. (2015)), in order to investigate performance on morphologically complex and Romance languages.

These datasets are somewhat more balanced in terms of label distributions, compared to error detection and NER, as no single label covers over 50% of the tokens. POS-tagging also offers a large variance of unique labels, with 48 labels in PTB and 42 in GENIA, and this can provide useful information to the models during training. The baseline performance on these datasets is also close to the upper bound, therefore we expect the language modeling objective to not provide much additional benefit.

The results of different sequence labeling architectures on POS-tagging can be seen in Table 3 and accuracy on the development set is shown in Figure 4.

             GENIA-POS       PTB-POS         UD-ES           UD-FI
             DEV     TEST    DEV     TEST    DEV     TEST    DEV     TEST
Baseline     98.69   98.61   97.23   97.24   96.38   95.99   95.02   94.80
+ dropout    98.79   98.71   97.36   97.30   96.51   96.16   95.88   95.60
+ LMcost     98.89   98.81   97.48   97.43   96.62   96.21   96.14   95.88

Table 3: Accuracy of different sequence labeling architectures on POS-tagging datasets.

[Figure 4: Token-level accuracy on the PTB-POS development set after each training epoch.]

While the performance improvements are small, they are consistent across all domains, languages and datasets. Application of dropout again provides a more robust model, and the language modeling cost improves the performance further. Even though the labels already offer a varied training objective, learning to predict the surrounding words at the same time has provided the model with additional general-purpose features.

8 Related Work

Our work builds on previous research exploring multi-task learning in the context of different sequence labeling tasks. The idea of multi-task learning was described by Caruana (1998) and has since been extended to many language processing tasks using neural networks. For example, Collobert and Weston (2008) proposed a multitask framework using weight-sharing between networks that are optimised for different supervised tasks. Cheng et al. (2015) described a system for detecting out-of-vocabulary names by also predicting the next word in the sequence. While they use a regular recurrent architecture, we propose a language modeling objective that can be integrated with a bidirectional network, making it applicable to existing state-of-the-art sequence labeling frameworks. Plank et al. (2016) described a related architecture for POS-tagging, predicting the frequency of each word together with the part-of-speech, and showed that this can improve tagging accuracy on low-frequency words. While predicting word frequency can be useful for POS-tagging, language modeling provides a more general training signal, allowing us to apply the model to many different sequence labeling tasks. Recently, Augenstein and Søgaard (2017) explored multi-task learning for classifying keyphrase boundaries, by incorporating tasks such as semantic super-sense tagging and identification of multi-word expressions. Bingel and Søgaard (2017) also performed a systematic comparison of task relationships by combining different datasets through multi-task learning. Both of these approaches involve switching to auxiliary datasets, whereas our proposed language modeling objective requires no additional data.

9 Conclusion

We proposed a novel sequence labeling framework with a secondary objective: learning to predict surrounding words for each word in the dataset. One half of a bidirectional LSTM is trained as a forward-moving language model, whereas the other half is trained as a backward-moving language model. At the same time, both of these are also combined, in order to predict the most probable label for each word. This modification can be applied to several common sequence labeling architectures and requires no additional annotated or unannotated data.

The objective of learning to predict surrounding words provides an additional source of information during training. The model is incentivised to discover useful features in order to learn the language distribution and composition patterns in the training data. While language modeling is not the main goal of the system, this additional training objective leads to more accurate sequence labeling models on several different tasks.
The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. We found that the additional language modeling objective provided consistent performance improvements on every benchmark. The largest benefit from the new architecture was observed on the task of error detection in learner writing. The label distribution in the original dataset is very sparse and unbalanced, making it a difficult task for the model to learn. The added language modeling objective allowed the system to take better advantage of the available training data, leading to 3.9% absolute improvement over the previous best architecture. The language modeling objective also provided consistent improvements on other sequence labeling tasks, such as named entity recognition, chunking and POS-tagging.

Future work could investigate the extension of this architecture to additional unannotated resources. Learning generalisable language features from large amounts of unlabeled in-domain text could provide sequence labeling models with additional benefit. While it is common to pretrain word embeddings on large-scale unannotated corpora, only limited research has been done towards useful methods for pre-training or co-training more advanced compositional modules.

References

Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, and Others. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.0:19. http://arxiv.org/abs/1605.02688.
Isabelle Augenstein and Anders Søgaard. 2017. Multi-Task Learning of Keyphrase Boundary Classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. http://arxiv.org/abs/1704.00514.
Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In arXiv preprint. http://arxiv.org/abs/1702.08303.
Rich Caruana. 1998. Multitask Learning. Ph.D. thesis.
Ciprian Chelba, Tomáš Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. In arXiv preprint. http://arxiv.org/abs/1312.3005.
Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-Domain Name Error Detection using a Multi-Task RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. Proceedings of the 25th international conference on Machine learning (ICML '08). https://doi.org/10.1145/1390156.1390177.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research 12. https://doi.org/10.1145/2347736.2347755.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/ICASSP.2013.6638947.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-term Memory. Neural Computation 9. https://doi.org/10.1.1.56.7752.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv:1508.01991. http://arxiv.org/pdf/1508.01991v1.pdf.
Ozan Irsoy and Claire Cardie. 2014. Opinion Mining with Deep Recurrent Neural Networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the Bio-entity Recognition Task at JNLPBA. Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications. https://doi.org/10.3115/1567594.1567610.
Martin Krallinger, Florian Leitner, Obdulia Rabal, Miguel Vazquez, Julen Oyarzabal, and Alfonso Valencia. 2015. CHEMDNER: The drugs and chemical names extraction challenge. Journal of Cheminformatics 7(Suppl 1). https://doi.org/10.1186/1758-2946-7-S1-S1.
John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proceedings of NAACL-HLT 2016.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19. https://doi.org/10.1162/coli.2010.36.1.36100.
Tomáš Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the International Conference on Learning Representations (ICLR 2013). https://doi.org/10.1162/153244303322533223.
Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent Neural Network based Language Model. Interspeech (September):1045–1048.
Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Neural Information Processing Systems (NIPS).
Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics. https://doi.org/10.1109/JCDL.2003.1204852.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 Shared Task on Grammatical Error Correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task. http://www.aclweb.org/anthology/W/W14/W14-1701.
Joakim Nivre, Željko Agić, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bowman, et al. 2015. Universal dependencies 1.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University. http://hdl.handle.net/11234/1-1548.
Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. Proceedings of the second international conference on Human Language Technology Research. http://portal.acm.org/citation.cfm?id=1289260.
Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL-16), pages 412–418. http://arxiv.org/abs/1604.05529.
Marek Rei, Gamal K. O. Crichton, and Sampo Pyysalo. 2016. Attending to Characters in Neural Sequence Labeling Models. In Proceedings of the 26th International Conference on Computational Linguistics (COLING-2016). http://arxiv.org/abs/1611.04361.
Marek Rei and Helen Yannakoudakis. 2016. Compositional Sequence Labeling Models for Error Detection in Learner Writing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. https://aclweb.org/anthology/P/P16/P16-1112.pdf.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research (JMLR) 15. https://doi.org/10.1214/12-AOS1000.
Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning 7. https://doi.org/10.3115/1117601.1117631.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003. http://arxiv.org/abs/cs/0306050.
Yoshimasa Tsuruoka, Yuka Tateishi, Jin Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun'ichi Tsujii. 2005. Developing a robust part-of-speech tagger for biomedical text. In Proceedings of Panhellenic Conference on Informatics.
Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A New Dataset and Method for Automatically Grading ESOL Texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. http://www.aclweb.org/anthology/P11-1019.
Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701. http://arxiv.org/abs/1212.5701.
GuoDong Zhou and Jian Su. 2004. Exploring Deep Knowledge Resources in Biomedical Name Recognition. Workshop on Natural Language Processing in Biomedicine and Its Applications at COLING.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 2131–2141, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1195

Semantic Parsing of Pre-university Math Problems

Takuya Matsuzaki1, Takumi Ito1, Hidenao Iwane2, Hirokazu Anai2, Noriko H. Arai3
1 Nagoya University, Japan {matuzaki,takumi i}@nuee.nagoya-u.ac.jp
2 Fujitsu Laboratories Ltd., Japan {iwane,anai}@jp.fujitsu.com
3 National Institute of Informatics, Japan [email protected]

Abstract

We have been developing an end-to-end math problem solving system that accepts natural language input. The current paper focuses on how we analyze the problem sentences to produce logical forms. We chose a hybrid approach combining a shallow syntactic analyzer and a manually-developed lexicalized grammar. A feature of the grammar is that it is extensively typed on the basis of a formal ontology for pre-university math. These types are helpful in semantic disambiguation inside and across sentences. Experimental results show that the hybrid system produces a well-formed logical form with 88% precision and 56% recall.

1 Introduction

Frege and Russell, the initiators of mathematical logic, delved also into the exploration of a theory of natural language semantics (Frege, 1892; Russell, 1905). Since then, symbolic logic has been a fundamental tool and a source of inspiration in the study of language meaning. It suggests that the formalization of the two realms, mathematical reasoning and language meaning, is actually the two sides of the same coin: probably, we could not even conceive the idea of formalizing language meaning without grounding it onto mathematical reasoning. This point was first clarified by Tarski (1936; 1944) mainly on formal languages and then extended to natural languages by Davidson (1967). Montague (1970a; 1970b; 1973) further embodied it by putting forward a terrifyingly arrogant and attractive idea of seeing a natural language as a formal language.

The automation of end-to-end math problem solving thus has an outstanding status in the research themes in natural language processing. The conceptual basis has been laid down, which connects text to the truth (= answer) through reasoning. However, we have not seen a fully automated system that instantiates it end-to-end. We wish to add a piece to the big picture by materializing it.

[Figure 1: Example problem —
Define the two straight lines L1 and L2 on the xy-plane as L1: y = 0 (x-axis) and L2: y = √3x. Let P be a point on the xy-plane. Let Q be the point symmetric to P about the straight line L1, and let R be the point symmetric to P about the straight line L2. Answer the following questions:
(1) Let (a, b) be the coordinates of P, then represent the coordinates of R using a and b.
(2) Assuming that the distance between the two points Q and R is 2, find the locus C of P.
(3) When the point P moves on C, find the maximum area of the triangle PQR and the coordinates of P that gives the maximum area.
(Hokkaido Univ., 1999-Sci-3)]
Past studies have mainly targeted at primary school level arithmetic word problems (Bobrow, 1964; Charniak, 1969; Kushman et al., 2014; Hosseini et al., 2014; Shi et al., 2015; Roy and Roth, 2015; Zhou et al., 2015; Koncel-Kedziorski et al., 2015; Mitra and Baral, 2016; Upadhyay et al., 2016). In their nature, arithmetic questions are quantifier-free. Moreover they tend to include only ∧(and) as the logical connective. The main challenge in these works was to extract simple numerical relations (most typically equations) from a real-world scenario described in a text. Seo et al. (2015) took SAT geometry questions as their benchmark. However, the nature of SAT geometry questions restricts the resulting formula’s complexity. In §3, we will show that none of them includes ∀(for all), ∨(or) or →(implies). It suggests that this type of questions require little need to analyze the logical structure of the problems beyond conjunctions of predicates. 2131 Problem shallow parsing coreference resolution math expr. analysis semantic parsing discourse parsing formula rewriting reasoning Solution Lexicon Ontology Axioms CAS ATP Type constraint Figure 2: Overview of the end-to-end math problem solving system We take pre-university math problems falling in the theory of real-closed fields (RCF) as our benchmark because of their variety and complexity. The subject areas include real and linear algebra, complex numbers, calculus, and geometry. Furthermore, many problems involve more than one subject: e.g., algebraic curves and calculus as in Fig. 1. Their logical forms include all the logical connectives, quantifiers, and λ-abstraction. Our goal is to recognize the complex logical structures precisely, including the scopes of the quantifiers and other logical operators. In the rest of the paper, we first present an overview of an end-to-end problem solving system (§2) and analyze the complexity of the preuniversity math benchmark in comparison with others (§3). Among the modules in the end-to-end system, we focus on the sentence-level semantic parsing component and describe an extensivelytyped grammar (§4 and §5), an analyzer for the math expressions in the text (§6), and two semantic parsing techniques to fight against the scarcity of the training data (§7) and the complexity of the domain (§8). Experimental results show the effectiveness of the presented techniques as well as the complexity of the task through an in-depth analysis of the end-to-end problem solving results (§9). 2 End-to-end Math Problem Solving Fig. 2 presents an overview of our end-to-end math problem solving system. A math problem text is firstly analyzed with a dependency parser. Anaphoric and coreferential expressions in the text are then identified and their antecedents are determined. We assume the math formulas in the problems are encoded in MathML presentation mark-up. A specialized parser processes each one of them to determine its syntactic category and semantic content. The semantic representation of each sentence is determined by a semantic parser based on Combinatory Categorial Grammar (CCG) (Steedman, 2001, 2012). The output from the CCG parser is a ranked list of sentence-level logical forms for each sentence. Dataset Succeeded Failed Success% Avg. 
Timeout Other Time DEV 75.3% (131/174) 10.5s 16.7% 8.1% TEST 78.2% (172/220) 16.2s 15.0% 6.8% Table 1: Performance of the reasoning module on manually formalized pre-university problems After the sentence-level processing steps, we determine the logical relations among the sentence-level logical forms (discourse parsing) by a simple rule-based system. It produces a tree structure whose leaves are labeled with sentences and internal nodes with logical connectives. Free variables in the logical form are then bound by some quantifiers (or kept free) and their scopes are determined according to the logical structure of the problem. A semantic representation of a problem is obtained as a formula in a higher-order logic through these language analysis steps. The logical representation is then rewritten using a set of axioms that define the meanings of the predicate and function symbols in the formula, such as maximum defined as follows: maximum(x, S) ↔x ∈S ∧∀y(y ∈S →y ≤x), as well as several logical rules such as βreduction. We hope to obtain a representation of the initial problem expressed in a decidable math theory such as RCF through these equivalencepreserving rewriting. Once we find such a formula, we invoke a computer algebra system (CAS) or an automatic theorem prover (ATP) to derive the answer. The reasoning module (i.e., the formula rewriting and the deduction with CAS and ATP) of the system has been extensively tested on a large collection of manually formalized pre-university math problems that includes more than 1,500 problems. It solves 70% of the them in the time limit of 10 minutes per problem. Table 1 shows the rate of successfully solved problems in the manually formalized version of the benchmark problems used in the current paper. 2132 ProbAvg. Avg. Uniq. Atoms ∃ ∀ λ ∧ ∨ ¬ → Unique lems tokens sents. words sketches JOBS 640 9.83 1.00 391 4.63 1.71 0.00 0.00 1.06 0.01 0.13 0.00 8 GEOQUERY 880 8.56 1.00 284 4.25 1.70 0.00 1.04 1.18 0.00 0.02 0.00 20 GEOMETRY 119 23.64 1.74 202 11.00 7.45 0.00 0.06 1.00 0.00 0.04 0.00 4 UNIV (DEV) 174 70.34 3.45 363 10.99 5.10 1.10 1.11 1.71 0.02 0.49 0.35 76 UNIV (TEST) 220 70.85 4.02 366 9.70 4.58 1.10 1.00 1.62 0.02 0.28 0.23 72 Table 2: Profile of pre-university math benchmark data and other semantic parsing benchmark data sets JOBS GEOQUERY GEOMETRY UNIV (DEV) ∃P 81% ∃P 46% ∃P 94% ∃P 25% P 6% ∃P(λ∃P) 24% ∃(P ∧¬P) 3% ∃(P ∧¬P) 7% ∃(P ∧¬∃P) 5% P(λ∃P) 8% ∃(P ∧P(λP)) 2% P(λ∃P) 5% ∃(P ∧¬P) 5% ∃(P ∧P(λ∃P)) 7% P(λ∃P) 1% ∃(P ∧P(λf)) 4% 97% 85% 100% 41% Table 3: Top four most frequest sketches and their coverage over the dataset Sketch Freq. ∀(P →∃(∀(P →P)∧P)) 2 ∃(∃(¬P ∧P)∧P ∧P(λf))∧P(λ(P →P))) 1 ∃(P ∧P(λ(¬P ∧∃(∃P ∧P)))) 1 ∃(P ∧P(λf))∧P(λ(¬P ∧P))∧P(λP)) 1 Table 4: Less frequent sketches in UNIV (DEV) 3 Profile of the Benchmark Data Our benchmark problems, UNIV, were collected from the past entrance exams of seven top-ranked universities in Japan. In the exams held in odd numbered years from 1999 to 2013, we exhaustively selected the problems which are ultimately expressible in RCF. They occupied 40% of all the problems. We divided the problems into two sets: DEV for development (those from year 1999 to 2005) and TEST for test (those from year 2007 to 2013). DEV was used for the lexicon development and the tuning of the end-to-end system. The problem texts (both in English and Japanese) with MathML mark-up and manually translated logical forms are publicly available at https: //github.com/torobomath. 
The manually translated logical forms were formulated in a higher-order semantic language introduced later in the paper. The translation was done as faithfully as possible to the original wordings of the problems. They thus keep the inherent logical structures expressed in natural language. Table 2 lists several statistics of the UNIV problems in the English version and their manual formalization. For comparison, the statistics of three other benchmarks are also listed. JOBS and GEOQUERY are collections of natural language queries against databases. They have been widely used as benchmarks for semantic parsing (e.g., Tang and Mooney, 2001; Zettlemoyer and Collins, 2005, 2007; Kwiatkowski et al., 2010, 2011; Liang et al., 2011). The queries are annotated with logical forms in Prolog. We converted them to equivalent higher-order formulas to collect comparable statistics. GEOMETRY is a collection of SAT geometry questions compiled by Seo et al. (2015). We formalized the GEOMETRY questions1 in our semantic language in the same way as UNIV. In Table 2, the first column lists the number of problems. The next three provide statistics of the problem texts: average number of words and sentences in a problem (‘Avg. tokens’ and ‘Avg. sents’), and the number of unique words in the whole dataset.2 They reveal that the sentences in UNIV are significantly longer than the others and more than three sentences have to be correctly processed for a problem. The remaining columns provide the statistics about the logical complexities of the problems. ‘Atoms’ stands for the average number of the occurrences of predicates per problem. The next three columns list the number of variables bound by ∃, ∀, and λ. We count sequential occurrences of the same binder as one. The columns for ∧, ∨, ¬, and →list the average number of them per problem.3 We can see UNIV includes a wider variety of quantifiers and connectives than the others. The final column lists the numbers of unique ‘sketches’ of the logical forms in the dataset. What 1Including all conditions expressed in the diagrams. 2 All the math formulas in the texts were replaced with a special token “MATH” before counting words. 3 ∧and ∨was counted as operators with arbitrary arity. E.g., there is only one ∧in A ∧B ∧C. 2133 we call ‘sketch’ here is a signature that encodes the overall structure of a logical form. Table 3 shows the top four most frequent sketches observed in the datasets. In a sketch, P stands for a (conjunction of) predicate(s) and f stands for a term. ∃, ∀, and λ stand for (immediately nested sequence of) the binders. To obtain the sketch of a formula φ, we first replace all the predicate symbols in φ to P and function symbols and constants to f. We then eliminate all variables in φ and ‘flatten’ it by applying the following rewriting rules to the sub-formulas in φ in the bottom-up order: f(..., f(α1, α2, ..., αn), ...) ⇒f(..., α1, α2, ..., αn, ...) P(..., f(α1, α2, ..., αn), ...) ⇒P(..., α1, α2, ..., αn, ...) α ∨α ∨β ⇒α ∨β, α ∧α ⇒α ∃∃ψ ⇒∃ψ, ∀∀ψ ⇒∀ψ, λλψ ⇒λψ Finally, we sort the arguments of Ps and fs and remove the duplicates among them. For instance, to obtain the sketch of the following formula: ∀k∀m  maximum(m, set(λe.(e < k))) →k −1 ≤m ∧m < k  , we replace the predicate/function symbols as in: ∀k∀m  P(m, f(λe.P(e, k))) →P(f(k, f), m) ∧P(m, k)  , and then eliminate the variables to have: ∀∀(P(f(λP)) →P(f(f)) ∧P), and finally flatten it to: ∀(P(λP) →P). 
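A greatly simplified version of this sketch extraction can be written as a recursive rewrite over a formula AST. The tuple encoding below is an assumption, argument sorting is omitted, and the parenthesisation differs slightly from the notation above, so this is an illustration of the idea rather than the exact procedure:

```python
QUANTS = {'forall': '∀', 'exists': '∃', 'lambda': 'λ'}
FORMULA_TAGS = ('pred', 'and', 'or', 'not', 'imp') + tuple(QUANTS)

def lifted_formula_args(args):
    """Pull arguments up through nested terms (the f-flattening step) and keep
    only formula- or lambda-valued arguments; plain terms and variables vanish."""
    out = []
    for a in args:
        if isinstance(a, tuple) and a[0] == 'func':
            out.extend(lifted_formula_args(a[2]))
        elif isinstance(a, tuple) and a[0] in FORMULA_TAGS:
            out.append(a)
    return out

def sketch(node):
    if not isinstance(node, tuple) or node[0] == 'func':
        return 'f'                                    # any term or variable becomes f
    tag = node[0]
    if tag == 'pred':
        kept = [sketch(a) for a in lifted_formula_args(node[2])]
        return 'P' if not kept else 'P(' + ','.join(kept) + ')'
    if tag in QUANTS:                                 # collapse runs of the same binder
        body = node[2]
        while isinstance(body, tuple) and body[0] == tag:
            body = body[2]
        return QUANTS[tag] + '(' + sketch(body) + ')'
    if tag in ('and', 'or'):
        parts = list(dict.fromkeys(sketch(c) for c in node[1:]))   # drop duplicates
        sep = '∧' if tag == 'and' else '∨'
        return parts[0] if len(parts) == 1 else '(' + sep.join(parts) + ')'
    if tag == 'imp':
        return '(' + sketch(node[1]) + '→' + sketch(node[2]) + ')'
    return '¬' + sketch(node[1])                      # tag == 'not'

# ∀k∀m( maximum(m, set(λe. e < k)) → k−1 ≤ m ∧ m < k )
phi = ('forall', 'k', ('forall', 'm',
       ('imp',
        ('pred', 'maximum', ['m', ('func', 'set', [('lambda', 'e', ('pred', '<', ['e', 'k']))])]),
        ('and', ('pred', '<=', [('func', '-', ['k', '1']), 'm']), ('pred', '<', ['m', 'k'])))))
print(sketch(phi))    # ∀((P(λ(P))→P)), cf. the ∀(P(λP) → P) derived above
```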
Table 3 shows that a wide variety of structures are found in UNIV while other data sets are dominated by a small number of structures. Table 4 presents some of less frequent sketches found in UNIV (DEV). In actuality, 67% of the unique sketches found in UNIV (DEV) occur only once in the dataset. These statistics suggest that the distribution of the logical structures found in UNIV, and math text in general, is very long-tailed. 4 A Type System for Pre-university Math Our semantic language is a higher-order logic (lambda calculus) with parametric polymorphism. Table 5 presents the types in the language. The atomic types are defined so that they capture the selectional restriction of verbs and other truth values Bool numbers Z (integers), Q (rationals), R (reals), C (complex) polynomials Poly single variable functions R2R (R→R), C2C (C→C) single variable equations EqnR (in R), EqnC (in C) points in 2D/3D space 2d.Point, 3d.Point geometric objects 2d.Shape, 3d.Shape vectors and matrices 2d.Vector, 3d.Vector matrices 2d.Matrix, 3d.Matrix angles 2d.Angle, 3d.Angle number sequences Seq cardinals and ordinals Card, Ord ratios among numbers Ratio limit values of functions LimitVal integer division QuoRem polymorphic containers SetOf(α), ListOf(α) polymorphic tuples Pair(α, β), Triple(α, β, γ) Table 5: Types defined in the semantic language argument-taking phrases as precisely as possible. For instance, an equation in real domain, e.g., x2 −1 = 0, can be regarded as a set of reals, i.e., {x | x2 −1 = 0}. However, we never say ‘a solution of a set.’ We thus discriminate an equation from a set in the type system even though the concept of equation is mathematically dispensable. Entities of equation and set are built by constructor functions that take a higher-order term as the argument as in eqn(λx.x2 −1) and set(λx.x2 −1). Related concepts such as ‘solution’ and ‘element’ are defined by the axioms for corresponding function and predicate symbols: ∀f∀x(solution(x, eqn(f)) ↔fx) ∀s∀x(element(x, set(s)) ↔sx). Distinction of cardinal numbers (Card) and ordinal numbers (Ord), and the introduction of ‘integer division’ type (QuoRem) are also linguistically motivated. The former is necessary to capture the difference between, e.g., ‘kth integer in n1, n2, . . . , nm’ and ‘k integers in n1, n2, . . . , nm.’ An object of type QuoRem is conceptually a pair of integers that represent the quotient and the remainder of integer division. It is linguistically distinct from the type of Pair(Z,Z) because, e.g., in Select a pair of integers (n, m) and divide n by m. If the remainder (of φ) is zero, ... the null (i.e., omitted) pronoun φ has ‘the result of division n/m’ as its antecedent but not (n, m). Polymorphism is a mandatory part of the language. Especially, the semantics of plural noun 2134 > > When S/(S\NP)/Sa : λP.λQ.π2(P) →Q(π1(P)) any k in K is divided by m, Sa : (quorem(k, m), (∃k; k ∈K)) S/(S\NP) : λQ.(∃k; k ∈K) →Q(quorem(k, m)) > the quotient T\NP/(T\NP) : λP.λx.P(quo of(x)) is 3. S\NP : λx.(x = 3) S\NP : λx.quo of(x) = 3 S : (∃k; k ∈K) →quo of(quorem(k, m)) = 3 Figure 3: Sketch of the derivation tree for a sentence including an action verb and quantification phrases is expressed by polymorphic lists and tuples: e.g., ‘the radii of the circles C1, C2, and C3’ is of type ListOf(R) and ‘the function f and its maximum value’ is of type Pair(R2R,R). 5 Lexicon and Grammar 5.1 Combinatory Categorial Grammar An instance of CCG grammar consists of a lexicon and a small number of combinatory rules. 
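Before the lexicon is described, the following minimal sketch shows one way the polymorphic types of Section 4 (Table 5) could be represented, together with the instance relation used informally above (e.g. SetOf(R) as an instance of SetOf(α)). All names are assumptions rather than the system's code:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class TVar:
    name: str                      # a type variable, e.g. "α"

@dataclass(frozen=True)
class TCon:
    name: str                      # e.g. "R", "Bool", "SetOf", "Pair"
    args: Tuple = ()               # empty for atomic types

R = TCon("R")
def SetOf(t): return TCon("SetOf", (t,))
def Pair(a, b): return TCon("Pair", (a, b))

def is_instance_of(t, pattern, subst=None):
    """True iff `t` can be obtained from `pattern` by substituting its type
    variables consistently."""
    subst = {} if subst is None else subst
    if isinstance(pattern, TVar):
        if pattern.name in subst:
            return subst[pattern.name] == t
        subst[pattern.name] = t
        return True
    return (isinstance(t, TCon) and t.name == pattern.name
            and len(t.args) == len(pattern.args)
            and all(is_instance_of(a, p, subst) for a, p in zip(t.args, pattern.args)))

assert is_instance_of(SetOf(R), SetOf(TVar("α")))          # SetOf(R) instantiates SetOf(α)
assert not is_instance_of(Pair(R, R), SetOf(TVar("α")))    # but Pair(R, R) does not
```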
A lexicon is a set of lexical items, each of which associates a word surface form with a syntactic category and a semantic function: e.g., sum :: NP/PP : λx.sum of(x) intersects :: S\NP/PP : λy.λx.intersect(x, y) A syntactic category is one of atomic categories, such as NP, PP, and S, or a complex category in the form of X/Y or X\Y, where X and Y are syntactic categories. The syntactic categories and the semantic functions of constituents are combined by applying combinatory rules. The most fundamental rules are forward (>) and backward (<) application: > X/Y : f Y : x X : fx < Y : x X\Y : f X : fx The atomic categories are further classified by features such as num(ber) and case of noun phrases. In the current paper, the features are written as in NP[num=pl,case=acc]. 5.2 A Japanese CCG Grammar and Lexicon We developed a Japanese CCG following the analysis of basic constructions by Bekki (2010) but significantly extending it by covering various phenomena related to copula verbs, action verbs, argument-taking nouns, appositions and so forth. The semantic functions are defined in the format of a higher-order version of dynamic predicate logic (Eijck and Stokhof, 2006). The dynamic property is necessary to analyze semantic phenomena related to quantifications, such as donkey anaphora. In the following examples, we use English instead of Japanese and the standard notation of higher-order logic for the sake of readability. We added two atomic categories, Sn and Sa, to the commonly used S, NP, and N. Category Sn is assigned to a proposition expressed as a math formula, such as ‘x > 0’. Semantically it is of type Bool but syntactically it behaves both like a noun phrase and a sentence. Category Sa is assigned to a sentence where the main verb is an action verb such as add and rotate. Such a sentence introduces the result of the action as a discourse entity (i.e., what can be an antecedent of coreferential expressions). The action verbs can also mediate quantification as in: When any k∈K is divided by m, the quotient is 3. ∀k(k ∈K →quo of(quorem(k, m)) = 3) where quorem(k, m) represents the result of the division (i.e., the pair of the quotient and the remainder) and quo of is a function that extracts the quotient from it. To handle such phenomena, we posit the semantic type of Sa as Pair(α, Bool) where the two components respectively bring the result of an action and the condition on it (including quantification). Fig. 3 presents a derivation tree for the above example.4 The atomic category NP, N, and Sa in our grammar have type feature. Its value is one of the types defined in the semantic language or a type variable when the entity type is underspecified. The lexical entry for ‘(an integer) divides (an integer)’ and ‘(a set) includes (an element)’ would thus have the following categories (other features than type are not shown): divides :: S\NP[type=Z]/NP[type=Z] includes :: S\NP[type=SetOf(α)]/NP[type=α] When defining a lexical item, we don’t have to explicitly specify the type features in most cases. They can be usually inferred from the definition of 4 In Fig. 3, the semantic part is in the dynamic logic format as in our real grammar where the dynamic binding (∃x; φ) →ψ is interpreted as ∀x(φ →ψ) in the standard predicate logic. Following our analysis of an analogous construction in Japanese, the null pronoun after ‘the quotient’ is filled by analysing the second clause as including a gap rather than filling it by zero-pronoun resolution. 2135 the semantic function. 
In the above example, divides will have λy.λx.(x|y) and includes will have λy.λx.(y ∈x) as their semantic functions. For both cases, the type feature of the NP arguments can be determined from the type definitions of the operators | and ∈in the ontology. The lexicon currently includes 54,902 lexical items for 8,316 distinct surface forms, in which 5,094 lexical items for 1,287 surface forms are for function words and functional multi-word expressions. The number of unique categories in the lexicon is 10,635. When the type features are ignored, there are still 4,026 distinct categories. 6 Math Expression Analysis The meaning of a math expression is composed with the semantic functions of surrounding words to produce a logical form. We dynamically generate lexical items for each math expression in a problem. Consider the following sentence including two ‘equations’: If a2−4=0, then x2+ax+1=0 has a real solution. The latter, x2+ax+1 = 0, should receive a lexical item of a noun phrase, NP : eqn(λx.x2 + a + 1), but the former, a2−4 = 0, should receive category S since it denotes a proposition. Such disambiguation is not always possible without semantic analysis of the text. We thus generate more than one lexical item for ambiguous expressions and let the semantic parser make a choice. To generate the lexical items, we first collect appositions to the math expressions, such as ‘integer n and m’ and ‘equation x2 + a = 0,’ and use them as the type constraints on the variables and the compound expressions. Compound expressions are then parsed with an operator precedence parser (Aho et al., 2006). Overloaded operators, such as + for numbers and vectors, are resolved using the type constrains whenever possible. Finally, we generate all possible interpretations of the expressions and select appropriate syntactic categories. We have seen only three categories of math expressions: NP, Sn, and T/(T\NP). The last one is used for a NP with post-modification, as in: > n > 0 T/(T\NP) : λP.(n > 0 ∧P(n)) is an even number S\NP : λx.(even(x)) S : n > 0 ∧even(n) Naomi-NOM garden-LOC walk-PAST Naomi ga niwa o arui ta 𝑁𝑎𝑜𝑚𝑖 𝑁𝑃 𝑔𝑎 𝑁𝑃∖𝑁𝑃 𝑁𝑃 𝑛𝑖𝑤𝑎 𝑁𝑃 𝑜 𝑁𝑃∖𝑁𝑃 𝑁𝑃 𝑎𝑟𝑢𝑖 𝑆∖𝑁𝑃∖𝑁𝑃 𝑆∖𝑁𝑃 𝑆 𝑡𝑎 𝑆∖𝑆 𝑆 (Naomi walked in the garden.) Figure 4: Bunsetsu dependency structure (top) and CCG derivation tree (bottom) 7 Two-step Semantic Parsing Two central issues in parsing are the cost of the search and the accuracy of disambiguation. Supervised learning is commonly used to solve both. It is however very costly to create the training data by manually annotating a large number of sentences with CCG trees. Past studies have tried to bypass it by so-called weak supervision, where a parser is trained only with the logical form (e.g., Kwiatkowski et al. 2011) or even only with the answers to the queries (e.g., Liang et al. 2011). Although the adaptation of such methods to the pre-university math data is an interesting future direction, we developed yet another approach based on a hybrid of shallow dependency parsing and the detailed CCG grammar. The syntactic structure of Japanese sentences has traditionally been analyzed based on the relations among word chunks called bunsetsus. A bunsetsu consists of one or more content words followed by zero or more function words. The dependencies among bunsetsus mostly correspond to the predicate-argument and interclausal dependencies (Fig. 4). The dependency structure hence matches the overall structure of a CCG tree only leaving the details unspecified. 
We derive a full CCG-tree by using a bunsetsu dependency tree as a constraint. We assume: (i) the fringe of each sub-tree in the dependency tree has a corresponding node in the CCG tree. We call such a node in the CCG tree ‘a matching node.’ We further assume: (ii) a matching node is combined with another CCG tree node whose span includes at least one word in the head bunsetsu of the matching node. Fig. 5 presents an example of a sentence consisting of four bunsetsus (rounded squares), each of which contains two words. In the figure, the i-th cell in the k-th row from the bottom is the CKY cell for the span from i-th to 2136 w1 w2 w3 w4 w5 w6 w7 w8 Figure 5: Restricted CKY parsing based on a shallow dependency structure (i+k-1)-th words. Under the two assumptions, we only need to fill the hatched cells given the dependency structure shown below the CKY chart. The hatched cells with a white circle indicate the positions of the matching nodes. Even under the constraint of a dependency tree, it is impractical to do exhaustive search. We use beam search based on a simple score function on the chart items that combines several features such as the number of atomic categories in the item. We also use N-best dependency trees to circumvent the dependency errors. The restricted CKY parsing is repeated on the N-best dependency trees until a CCG tree is obtained. Our hope is to reject a dependency error as violation of the syntactic and semantic constraints encoded in the CCG lexicon. In the experiment, we used a Japanese dependency parser developed by Kudo and Matsumoto (2002). We modified it to produce N-best outputs and used up to 20-best trees per sentence. 8 Global Type Coherency The well-typedness of the logical form is usually guaranteed by the combinatory rules. However, they do not always guarantee the type coherency among the interpretations of the math expressions. For instance, consider the following derivation: > if x + y ∈U, S/S : λP.(addR(x, y) ∈U →P) then x + z ∈V. S : addV(x, y) ∈V S : addR(x, y) ∈U →addV(x, z) ∈V The + symbol is interpreted as the addition of real numbers (addR) in the first clause but that of vectors (addV) in the second one. The logical form is not typable because the two occurrences of x must have different types. The forward application rule does not reject this derivation since the categories of the two clauses perfectly match the rule schema. We can reject such inconsistency by doing type checking on the logical form at every step of the Algorithm 1 Global type coherence check procedure PARSEPROBLEM Envs ←∅; AllDerivs ←[] for each sentence s in the problem do Chart ←INITIALIZECKYCHART(s, Envs) Derivs ←TWOSTEPPARSING(s, Chart) Envs ←UPDATEENVIRONMENTS(Envs, Derivs) AllDerivs ←AllDerivs ⊕[Derivs] return AllDerivs // s: a sentence; Envs: a set of environments procedure INITIALIZECKYCHART(s, Envs) Chart ←empty CKY chart for each token t in s do for each lexical item C : f for t do // C: category, f: semantic function if t is a math expression then for each environment Γ ∈Envs do if Γ is unifiable with FV(f) then add (C, Γ ⊔FV(f)) to Chart else // t is a normal word add (C, ∅) to Chart return Chart FV(f): the environment that maps the free variables in a semantic function f to their principal types determined by type inference on f. 
// Envs: a set of environments; Derivs: derivations trees procedure UPDATEENVIRONMENTS(Envs, Derivs) NewEnvs ←∅// environments for the next sentence for each derivation d ∈Derivs do Γ ←the environment at the root of d if Γ ̸= ∅then // update the environments NewEnvs ←NewEnvs ∪{Γ} else // no update: there was no math expression NewEnvs ←NewEnvs ∪Envs // eliminate those subsumed by other environments return MOSTGENERALENVIRONMENTS(NewEnvs) derivation. It is however quite time consuming because we cannot use dynamic programming any more and need to do type checking on numerous chart items. Furthermore, such type inconsistency may happen across sentences. Instead, we consider the type environment while parsing. A type environment, written as {v1 : T1, v2 : T2, . . . }, is a finite function from variables to type expressions. A pair v : T means that the variable v must be of type T or its instance (e.g., SetOf(R) is an instance of SetOf(α)). For example, the logical form of the first clause of the above sentence is typable under {x:R, y :R, z :α, U :SetOf(R), V :β}, but that of the second clause isn’t. Please refer, e.g., to (Pierce, 2002) for the formal definitions. Two environments Γ1 and Γ2 are unifiable iff there exists a substitution σ that maps the type variables in Γ1 and Γ2 to some type expressions so that Γ1σ = Γ2σ holds. We write Γ1 ⊔Γ2 for the result of such substitution (i.e., unification) with the 2137 < < n (NP[α], {n : α}) > divides (S\NP[Z]/NP[Z], ∅) 12 (NP[Z], ∅) (S\NP[Z], ∅) (S, {n : Z}) > iff (S\S/Sn, ∅) n ∈U (Sn, {n : β, U : SetOf(β)}) (S\S, {n : β, U : SetOf(β)}) (S, {n : Z, U : SetOf(Z)}) Figure 6: CCG parsing with type environment Dataset Correct TimeWrong No Parse out RCF failure DEV 27.6% 10.9% 12.1% 12.1% 37.4% TEST 11.4% 1.8% 11.4% 6.8% 68.6% (Correct: correct answer; Timeout: reasoning did not finish in 10 min; Wrong: wrong answer; No RCF: no RCF formula was obtained by rewriting the logical form; Parse failure: at least one sentence in the problem did not receive a CCG tree) Table 6: Result of end-to-end problem solving Dataset Dep. Parsed Sentences (%) train N=1 N=5 N=10 N=20 DEV News 48.9 69.1 72.6 76.6 News+Math 70.5 81.6 84.6 86.4 TEST News 46.6 58.7 61.9 64.7 News+Math 59.3 65.3 66.9 68.3 Table 7: Fraction of sentences on which a CCG tree was obtained in top N dependency trees most general σ (most general unifier, mgu). We associate a type environment with each chart item and refine it through parsing. The type constraints implied in a discourse are accumulated in the environment and block the generation of incoherent derivations (Algorithm 1). Fig. 6 presents an example of a parsing result, in which the type constraints implied in the two clauses are unified at the root and the type of U is determined. When we apply a combinatory rule, we first check if the environments of the child chart items are unifiable. If so, we put the unified environment in the parent item and apply the unifier to the type features in the parent category. For instance, the forward application rule is revised as follows: (X/Y, Γ1) + (Y, Γ2) →(Xσ, Γ1 ⊔Γ2), where σ is the mgu of Γ1 and Γ2 and Xσ means the application of σ to the type features in X.5 5 To be precise, we also consider the type constraints induced through the unification of the categories. It can be seen in the derivation step for “n divides 12” in Fig. 6, where the new constraint n :Z is induced by the unification of NP[α] and NP[Z] and merged into the environment of the parent. 
9 Experiments and Analysis This section presents the overall performance of the current end-to-end system and demonstrates the effectiveness of the proposed parsing techniques. We also present an analysis of the failures. Table 6 presents the result of end-to-end problem solving on the UNIV data. It shows the failure in the semantic parsing is a major bottleneck in the current system. Since a problem in UNIV includes more than three sentences on average, parsing a whole problem is quite a high bar for a semantic parser. It is however necessary to solve it by the nature of the task. Once a problem-level logical form was produced, the system yielded a correct solution for 44% of such problems in DEV and 36% in TEST. Table 7 lists the fraction of the sentences on which the two-step parser produced a CCG tree within top-N dependency trees. We compared the results obtained with the dependency parser trained only on a news corpus (News) (Kurohashi and Nagao, 2003), which is annotated with bunsetsu level dependencies, and that trained additionally with a math problem corpus consisting of 6,000 sentences6 (News+Math). The math problem corpus was developed according to the same annotation guideline for the news corpus. The attachment accuracy of the dependency parser was 84% on math problem text when trained only on the news corpus but improved to 94% by the addition of the math problem corpus. The performance gain by increasing N is more evident in the results with the News parser than that with the News+Math parser. It suggests the grammar properly rejected wrong dependency trees, which were ranked higher by the News parser. The effect of the additional training is very large at small Ns and still significant at N = 20. It means that we successfully boosted both the speed and the success rate of CCG parsing only with the shallow dependency annotation on in-domain data. 6 No overlap with DEV and TEST sections of UNIV. 2138 Dataset Parsing w/ Typing Correct type env. failure (%) answer (%) DEV no 9.8% 21.8% yes 0.6% 27.6% TEST no 8.6% 8.6% yes 0.0% 11.4% Table 8: Effect of parsing with type environment Freq. Reason for the parse failures (on TEST-2007) 17 Unknown usage of known content words 9 Unknown content words 8 Errors in coreference resolution 4 Missing math expression interpretaions 3 Unknown usage of known function words 3 Unknown function words 2 No correct dependency tree in 20-best Table 9: Reasons for the parse failures Table 8 shows the effect of CCG parsing with type environments. The column headed ‘Typing failure’ is the fraction of the problems on which no logical form was obtained due to typing failure. Parsing with type environment eliminated almost all such failures and significantly improved the number of correct answers. The remaining type failure was due to beam thresholding where a necessary derivation fell out of the beam. Table 9 lists the reasons for the parse failures on 1/4 of the TEST section (the problems taken from exams on 2007). In the table, “unknown usage” means a missing lexical item for a word already in the lexicon. “Unknown word” means no lexical item was defined for the word. Collecting unknown usages (especially that of a function word) is much harder than just compiling a list of words. Our experience in the lexicon development tells us that once we find a usage example, in the large majority of the cases, it is not difficult to write down its syntactic category and semantic function. 
Table 9 suggests that we can efficiently detect and collect unknown word usages through parsing failures on a large raw corpus of math problems. Table 10 presents the accuracy of the sentenceand problem-level logical forms produced on the year 1999 subset of DEV and the year 2007 subset of TEST. Although the recall on the unseen test data is not as high as we hope, the high precision of the sentence-level logical forms is encouraging. Table 11 provides the counts of the error types found in the wrong sentence-level logical forms produced on DEV-1999 and TEST-2007. It reveals the majority of the errors are related to the choice of quantifier (∃, ∀, or free) and logical opDataset Precision Recall sentenceDEV-1999 83% (64/77) 72% (64/ 89) level TEST-2007 88% (64/73) 56% (64/114) problemDEV-1999 75% (18/24) 45% (18/40) level TEST-2007 50% (8/16) 15% (8/53) Table 10: Accuracy of logical forms Error type DEVTEST1999 2007 Bind a variable or leave it free 6 2 Wrong math expr. interpretaion 6 1 Quantifier choice 0 3 Quantifier scope 1 1 Logical connective choice 1 1 Logical connective scope 1 0 Others 1 2 Table 11: Types of errors in the logical forms erators (e.g., →vs. ↔) as well as the determination of their scopes. Meanwhile, we did not find an error related to the predicate-argument structure of a logical form. This fact and the results in Table 6 suggest that the selectional restrictions, encoded in the lexicon, properly rejected nonsensical predicate-argument relations. Our next step is to introduce a more sophisticated disambiguation model on top of the grammar, enjoying the properly confined search space. 10 Conclusion We have explained why the task of end-to-end math problem solving matters for a practical theory of natural language semantics and introduced the semantic parsing of pre-university math problems as a novel benchmark. The statistics of the benchmark data revealed that it includes far more complex semantic structures than the other benchmarks. We also presented an overview of an endto-end problem solving system and described two parsing techniques motivated by the scarcity of the annotated data and the need for the type coherency of the analysis. Experimental results demonstrated the effectiveness of the proposed techniques and showed the accuracy of the sentence-level logical form was 88% precision and 56% recall. Our future work includes the expansion of the lexicon with the aid of the semantic parser and the development of a disambiguation model for the binding and scoping structures. 2139 References Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. 2006. Compilers: Principles, Techniques, and Tools (2nd Edition). Addison-Wesley Longman Publishing Co., Inc. Daisuke Bekki. 2010. Nihongo-bunpou no keishikiriron (in Japanese). Kuroshio Shuppan. Daniel Gureasko Bobrow. 1964. Natural language input for a computer problem solving system. Ph.D. thesis, Massachusetts Institute of Technology. Eugene Charniak. 1969. Computer solution of calculus word problems. In Proceedings of the 1st International Joint Conference on Artificial Intelligence. San Francisco, CA, USA, pages 303–316. http://dl.acm.org/citation.cfm?id=1624562.1624593. Donald Davidson. 1967. Truth and meaning. Synthese 17(1):304–323. Jan Van Eijck and Martin Stokhof. 2006. The gamut of dynamic logic. In Handbook of the History of Logic, Volume 6 Logic and the Modalities in the Twentieth Century, Elsevier, pages 499–600. Gottlob Frege. 1892. ¨Uber Sinn und Bedeutung. 
Zeitschrift f¨ur Philosophie und philosophische Kritik 100:25–50. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 523–533. http://aclweb.org/anthology/D/D14/D14-1058.pdf. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics 3:585–597. https://transacl.org/ojs/index.php/tacl/article/view/692. Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In CoNLL 2002: Proceedings of the 6th Conference on Natural Language Learning 2002 (COLING 2002 Post-Conference Workshops). pages 63–69. http://aclweb.org/anthology/W/W02/W022016.pdf. Sadao Kurohashi and Makoto Nagao. 2003. Building A Japanese Parsed Corpus, Springer Netherlands, Dordrecht, pages 249–260. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pages 271–281. http://www.aclweb.org/anthology/P14-1026. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic ccg grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. pages 1223–1233. http://dl.acm.org/citation.cfm?id=1870658.1870777. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 1512–1523. http://dl.acm.org/citation.cfm?id=2145432.2145593. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pages 590– 599. http://www.aclweb.org/anthology/P11-1060. Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 2144–2153. http://www.aclweb.org/anthology/P16-1202. Richard Montague. 1970a. English as a formal language. In Bruno Visentini, editor, Linguaggi nella Societa e nella Tecnica, Edizioni di Communit`a, pages 189–224. Richard Montague. 1970b. Universal grammar. Theoria 36(3):373–398. https://doi.org/10.1111/j.17552567.1970.tb00434.x. Richard Montague. 1973. The proper treatment of quantification in ordinary english. In Patrick Suppes, Julius Moravcsik, and Jaakko Hintikka, editors, Approaches to Natural Language, Dordrecht, pages 221–242. Benjamin C. Pierce. 2002. Types and Programming Languages. MIT Press. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1743–1752. http://aclweb.org/anthology/D15-1202. Bertrand Russell. 1905. On denoting. Mind 14(56):479–493. http://www.jstor.org/stable/2248381. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1466–1476. http://aclweb.org/anthology/D15-1171. 2140 Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1132–1142. http://aclweb.org/anthology/D15-1135. Mark Steedman. 2001. The Syntactic Process. Bradford Books. MIT Press. Mark Steedman. 2012. Taking Scope - The Natural Semantics of Quantifiers. MIT Press. http://mitpress.mit.edu/books/taking-scope. Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of the 12th European Conference on Machine Learning. pages 466–477. http://www.cs.utexas.edu/users/ailab/?tang:ecml01. Alfred Tarski. 1936. The concept of truth in formalized languages. In A. Tarski, editor, Logic, Semantics, Metamathematics, Oxford University Press, pages 152–278. Alfred Tarski. 1944. The semantic conception of truth: and the foundations of semantics. Philosophy and Phenomenological Research 4(3):341–376. http://www.jstor.org/stable/2102968. Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 297–306. https://aclweb.org/anthology/D16-1029. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 678–687. http://www.aclweb.org/anthology/D07-1071. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence. pages 658– 666. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 817–822. http://aclweb.org/anthology/D15-1096. 2141
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 11–22 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1002 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 11–22 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1002 Neural End-to-End Learning for Computational Argumentation Mining Steffen Eger†‡, Johannes Daxenberger†, Iryna Gurevych†‡ †Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universitt Darmstadt ‡Ubiquitous Knowledge Processing Lab (UKP-DIPF) German Institute for Educational Research and Educational Information http://www.ukp.tu-darmstadt.de Abstract We investigate neural techniques for endto-end computational argumentation mining (AM). We frame AM both as a tokenbased dependency parsing and as a tokenbased sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance results. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios, being able to catch longrange dependencies inherent to the AM problem. Moreover, we find that jointly learning ‘natural’ subtasks, in a multi-task learning setup, improves performance. 1 Introduction Computational argumentation mining (AM) deals with finding argumentation structures in text. This involves several subtasks, such as: (a) separating argumentative units from non-argumentative units, also called ‘component segmentation’; (b) classifying argument components into classes such as “Premise” or “Claim”; (c) finding relations between argument components; (d) classifying relations into classes such as “Support” or “Attack” (Persing and Ng, 2016; Stab and Gurevych, 2017). Thus, AM would have to detect claims and premises (reasons) in texts such as the following, where premise P supports claim C: Since it killed many marine livesP , ::::::: tourism ::: has:::::::::: threatened:::::: natureC . Argument structures in real texts are typically much more complex, cf. Figure 1. While different research has addressed different subsets of the AM problem (see below), the ultimate goal is to solve all of them, starting from unannotated plain text. Two recent approaches to this end-to-end learning scenario are Persing and Ng (2016) and Stab and Gurevych (2017). Both solve the end-to-end task by first training independent models for each subtask and then defining an integer linear programming (ILP) model that encodes global constraints such as that each premise has a parent, etc. Besides their pipeline architecture the approaches also have in common that they heavily rely on hand-crafted features. Hand-crafted features pose a problem because AM is to some degree an “arbitrary” problem in that the notion of “argument” critically relies on the underlying argumentation theory (Reed et al., 2008; Biran and Rambow, 2011; Habernal and Gurevych, 2015; Stab and Gurevych, 2017). Accordingly, datasets typically differ with respect to their annotation of (often rather complex) argument structure. Thus, feature sets would have to be manually adapted to and designed for each new sample of data, a challenging task. The same critique applies to the designing of ILP constraints. 
Moreover, from a machine learning perspective, pipeline approaches are problematic because they solve subtasks independently and thus lead to error propagation rather than exploiting interrelationships between variables. In contrast to this, we investigate neural techniques for end-to-end learning in computational AM, which do not require the hand-crafting of features or constraints. The models we survey also all capture some notion of “joint”—rather than “pipeline”—learning. We investigate several approaches. First, we frame the end-to-end AM problem as a dependency parsing problem. Dependency parsing may be considered a natural choice for AM, because argument structures often form trees, 11 or closely resemble them (see §3). Hence, it is not surprising that ‘discourse parsing’ (Muller et al., 2012) has been suggested for AM (Peldszus and Stede, 2015). What distinguishes our approach from these previous ones is that we operate on the token level, rather than on the level of components, because we address the end-toend framework and, thus, do not assume that nonargumentative units have already been sorted out and/or that the boundaries of argumentative units are given. Second, we frame the problem as a sequence tagging problem. This is a natural choice especially for component identification (segmentation and classification), which is a typical entity recognition problem for which BIO tagging is a standard approach, pursued in AM, e.g., by Habernal and Gurevych (2016). The challenge in the end-to-end setting is to also include relations into the tagging scheme, which we realize by coding the distances between linked components into the tag label. Since related entities in AM are oftentimes several dozens of tokens apart from each other, neural sequence tagging models are in principle ideal candidates for such a framing because they can take into account long-range dependencies—something that is inherently difficult to capture with traditional feature-based tagging models such as conditional random fields (CRFs). Third, we frame AM as a multi-task (tagging) problem (Caruana, 1997; Collobert and Weston, 2008). We experiment with subtasks of AM—e.g., component identification—as auxiliary tasks and investigate whether this improves performance on the AM problem. Adding such subtasks can be seen as analogous to de-coupling, e.g., component identification from the full AM problem. Fourth, we evaluate the model of Miwa and Bansal (2016) that combines sequential (entity) and tree structure (relation) information and is in principle applicable to any problem where the aim is to extract entities and their relations. As such, this model makes fewer assumptions than our dependency parsing and tagging approaches. The contributions of this paper are as follows. (1) We present the first neural end-to-end solutions to computational AM. (2) We show that several of them perform better than the state-of-theart joint ILP model. (3) We show that a framing of AM as a token-based dependency parsing problem is ineffective—in contrast to what has been proposed for systems that operate on the coarser component level and that (4) a standard neural sequence tagging model that encodes distance information between components performs robustly in different environments. 
Finally, (5) we show that a multi-task learning setup where natural subtasks of the full AM problem are added as auxiliary tasks improves performance.1 2 Related Work AM has applications in legal decision making (Palau and Moens, 2009; Moens et al., 2007), document summarization, and the analysis of scientific papers (Kirschner et al., 2015). Its importance for the educational domain has been highlighted by recent work on writing assistance (Zhang and Litman, 2016) and essay scoring (Persing and Ng, 2015; Somasundaran et al., 2016). Most works on AM address subtasks of AM such as locating/classifying components (Florou et al., 2013; Moens et al., 2007; Rooney et al., 2012; Knight et al., 2003; Levy et al., 2014; Rinott et al., 2015). Relatively few works address the full AM problem of component and relation identification. Peldszus and Stede (2016) present a corpus of microtexts containing only argumentatively relevant text of controlled complexity. To our best knowledge, Stab and Gurevych (2017) created the only corpus of attested high quality which annotates the AM problem in its entire complexity: it contains token-level annotations of components, their types, as well as relations and their types. 3 Data We use the dataset of persuasive essays (PE) from Stab and Gurevych (2017), which contains student essays written in response to controversial topics such as “competition or cooperation—which is better?” Train Test Essays 322 80 Paragraphs 1786 449 Tokens 118648 29538 Table 1: Corpus statistics As Table 1 details, the corpus consists of 402 essays, 80 of which are reserved for testing. The an1Scripts that document how we ran our experiments are available from https://github.com/UKPLab/ acl2017-neural_end2end_AM. 12 MC1 MC2 C1 C2 C3 P1 P2 P3 P4 P5 P6 MC1 C1 P1 P2 P3 P4 C2 P5 P6 C3 MC2 Figure 1: Bottom: Linear argumentation structure in a student essay. The essay is comprised of nonargumentative units (square) and argumentative units of different types: Premises (P), claims (C) and major claims (MC). Top: Relationsships between argumentative units. Solid arrows are support (for), dashed arrows are attack (against). notation distinguishes between major claims (the central position of an author with respect to the essay’s topic), claims (controversial statements that are either for or against the major claims), and premises, which give reasons for claims or other premises and either support or attack them. Overall, there are 751 major claims, 1506 claims, and 3832 premises. There are 5338 relations, most of which are supporting relations (>90%). The corpus has a special structure, illustrated in Figure 1. First, major claims relate to no other components. Second, claims always relate to all other major claims.2 Third, each premise relates to exactly one claim or premise. Thus, the argument structure in each essay is—almost—a tree. Since there may be several major claims, each claim potentially connects to multiple targets, violating the tree structure. This poses no problem, however, since we can “loss-lessly” re-link the claims to one of the major claims (e.g., the last major claim in a document) and create a special root node to which the major claims link. From this tree, the actual graph can be uniquely reconstructed. There is another peculiarity of this data. Each essay is divided into paragraphs, of which there are 2235 in total. The argumentation structure is completely contained within a paragraph, except, possibly, for the relation from claims to major claims. 
Paragraphs have an average length of 66 tokens and are therefore much shorter than essays, which have an average length of 368 tokens. Thus, prediction on the paragraph level is easier than 2All MCs are considered as equivalent in meaning. prediction on the essay level, because there are fewer components in a paragraph and hence fewer possibilities of source and target components in argument relations. The same is true for component classification: a paragraph can never contain premises only, for example, since premises link to other components. 4 Models This section describes our neural network framings for end-to-end AM. Sequence Tagging is the problem of assigning each element in a stream of input tokens a label. In a neural context, the natural choice for tagging problems are recurrent neural nets (RNNs) in which a hidden vector representation ht at time point t depends on the previous hidden vector representation ht−1 and the input xt. In this way, an infinite window (“long-range dependencies”) around the current input token xt can be taken into account when making an output prediction yt. We choose particular RNNs, namely, LSTMs (Hochreiter and Schmidhuber, 1997), which are popular for being able to address vanishing/exploding gradients problems. In addition to considering a left-to-right flow of information, bidirectional LSTMs (BL) also capture information to the right of the current input token. The most recent generation of neural tagging models add label dependencies to BLs, so that successive output decisions are not made independently. This class of models is called BiLSTM13 CRF (BLC) (Huang et al., 2015). The model of Ma and Hovy (2016) adds convolutional neural nets (CNNs) on the character-level to BiLSTMCRFs, leading to BiLSTM-CRF-CNN (BLCC) models. The character-level CNN may address problems of out-of-vocabulary words, that is, words not seen during training. AM as Sequence Tagging: We frame AM as the following sequence tagging problem. Each input token has an associated label from Y, where Y = {(b, t, d, s) | b ∈{B, I, O}, t ∈{P, C, MC, ⊥}, d ∈{. . . , −2, −1, 1, 2, . . . , ⊥}, s ∈{Supp, Att, For, Ag, ⊥}}. (1) In other words, Y consists of all four-tuples (b, t, d, s) where b is a BIO encoding indicating whether the current token is non-argumentative (O) or begins (B) or continues (I) a component; t indicates the type of the component (claim C, premise P, or major claim MC for our data). Moreover, d encodes the distance—measured in number of components—between the current component and the component it relates to. We encode the same d value for each token in a given component. Finally, s is the relation type (“stance”) between two components and its value may be Support (Supp), Attack (Att), or For or Against (Ag). We also have a special symbol ⊥that indicates when a particular slot is not filled: e.g., a nonargumentative unit (b = O) has neither component type, nor relation, nor relation type. We refer to this framing as STagT (for “Simple Tagging”), where T refers to the tagger used. For the example from §1, our coding would hence be: Since it killed many (O,⊥,⊥,⊥) (B,P,1,Supp) (I,P,1,Supp) (I,P,1,Supp) marine lives , tourism (I,P,1,Supp) (I,P,1,Supp) (O,⊥,⊥,⊥) (B,C,⊥,For) has threatened nature . 
(I,C,⊥,For) (I,C,⊥,For) (I,C,⊥,For) (O,⊥, ⊥, ⊥) While the size of the label set Y is potentially infinite, we would expect it to be finite even in a potentially infinitely large data set, because humans also have only finite memory and are therefore expected to keep related components close in textual space. Indeed, as Figure 2 shows, in our PE essay data set about 30% of all relations between components have distance −1, that is, they follow the claim or premise that they attach to. Overall, around 2/3 of all relation distances d lie in {−2, −1, 1}. However, the figure also illustrates that there are indeed long-range dependencies: distance values between −11 and +10 are observed in the data. 0 5 10 15 20 25 30 −10 −5 0 5 10 % d d Figure 2: Distribution of distances d between components in PE dataset. Multi-Task Learning Recently, there has been a lot of interest in so-called multi-task learning (MTL) scenarios, where several tasks are learned jointly (Søgaard and Goldberg, 2016; Peng and Dredze, 2016; Yang et al., 2016; Rusu et al., 2016; H´ector and Plank, 2017). It has been argued that such learning scenarios are closer to human learning because humans often transfer knowledge between several domains/tasks. In a neural context, MTL is typically implemented via weight sharing: several tasks are trained in the same network architecture, thereby sharing a substantial portion of network’s parameters. This forces the network to learn generalized representations. In the MTL framework of Søgaard and Goldberg (2016) the underlying model is a BiLSTM with several hidden layers. Then, given different tasks, each task k ‘feeds’ from one of the hidden layers in the network. In particular, the hidden states encoded in a specific layer are fed into a multiclass classifier fk. The same work has demonstrated that this MTL protocol may be successful when there is a hierarchy between tasks and ‘lower’ tasks feed from lower layers. AM as MTL: We use the same framework STagT for modeling AM as MTL. However, we in addition train auxiliary tasks in the network— each with a distinct label set Y′. Dependency Parsing methods can be classified into graph-based and transition-based approaches (Kiperwasser and Goldberg, 2016). Transitionbased parsers encode the parsing problem as a sequence of configurations which may be modified by application of actions such as shift, reduce, 14 etc. The system starts with an initial configuration in which sentence elements are on a buffer and a stack, and a classifier successively decides which action to take next, leading to different configurations. The system terminates after a finite number of actions, and the parse tree is read off the terminal configuration. Graph-based parsers solve a structured prediction problem in which the goal is learning a scoring function over dependency trees such that correct trees are scored above all others. Traditional dependency parsers used handcrafted feature functions that look at “core” elements such as “word on top of the stack”, “POS of word on top of the stack”, and conjunctions of core features such as “word is X and POS is Y” (see McDonald et al. (2005)). Most neural parsers have not entirely abandoned feature engineering. Instead, they rely, for example, on encoding the core features of parsers as low-dimensional embedding vectors (Chen and Manning, 2014) but ignore feature combinations. 
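Returning briefly to the tagging framing above: the (b, t, d, s) labels can be derived mechanically from a component-level annotation. The following is a minimal sketch under our own assumptions (components are given as token spans with a type, the index of the related component, and a stance; encode_stag_labels is a hypothetical helper, not the authors' code):

def encode_stag_labels(n_tokens, components):
    # A component is (start, end, ctype, target, stance); token indices are
    # 0-based with `end` exclusive, `target` is the index of the related
    # component (None for unlinked components), and d counts components.
    labels = [("O", None, None, None)] * n_tokens   # non-argumentative default
    for i, (start, end, ctype, target, stance) in enumerate(components):
        d = (target - i) if target is not None else None
        for pos in range(start, end):
            b = "B" if pos == start else "I"
            labels[pos] = (b, ctype, d, stance)
    return labels

# "Since it killed many marine lives , tourism has threatened nature ."
components = [
    (1, 6, "P", 1, "Supp"),     # premise, supports component 1 (the claim)
    (7, 11, "C", None, "For"),  # claim, stance For
]
sentence = "Since it killed many marine lives , tourism has threatened nature ."
for tok, lab in zip(sentence.split(), encode_stag_labels(12, components)):
    print(tok, lab)

Running it on the example sentence reproduces the labels shown above, with None standing in for ⊥.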
Kiperwasser and Goldberg (2016) design a neural parser that uses only four features: the BiLSTM vector representations of the top 3 items on the stack and the first item on the buffer. In contrast, Dyer et al. (2015)’s neural parser associates each stack with a “stack LSTM” that encodes their contents. Actions are chosen based on the stack LSTM representations of the stacks, and no more feature engineering is necessary. Moreover, their parser has thus access to any part of the input, its history and stack contents. AM as Dependency Parsing: To frame a problem as a dependency parsing problem, each instance of the problem must be encoded as a directed tree, where tokens have heads, which in turn are labeled. For end-to-end AM, we propose the framing illustrated in Figure 3. We highlight two design decisions, the remaining are analogous and/or can be read off the figure. • The head of each non-argumentative text token is the document terminating token END, which is a punctuation mark in all our cases. The label of this link is O, the symbol for non-argumentative units. • The head of each token in a premise is the first token of the claim or premise that it links to. The label of each of these links is (b, P, Supp) or (b, P, Att) depending on whether a premise “supports” or “attacks” a claim or premise; b ∈{B, I}. 1 2 3 4 5 6 7 8 9 10 11 12 O (B,P,Supp) (I,P,Supp) O (B,C,For) Figure 3: Dependency representation of sample sentence from §1. Links and selected labels. LSTM-ER Miwa and Bansal (2016) present a neural end-to-end system for identifying both entities as well as relations between them. Their entity detection system is a BLC-type tagger and their relation detection system is a neural net that predicts a relation for each pair of detected entities. This relation module is a TreeLSTM model that makes use of dependency tree information. In addition to de-coupling entity and relation detection but jointly modeling them,3 pretraining on entities and scheduled sampling (Bengio et al., 2015) is applied to prevent low performance at early training stages of entity detection and relation classification. To adapt LSTM-ER for the argument structure encoded in the PE dataset, we model three types of entities (premise, claim, major claim) and four types of relations (for, against, support, attack). We use the feature-based ILP model from Stab and Gurevych (2017) as a comparison system. This system solves the subtasks of AM—component segmentation, component classification, relation detection and classification— independently. Afterwards, it defines an ILP model with various constraints to enforce valid argumentation structure. As features it uses structural, lexical, syntactic and context features, cf. Stab and Gurevych (2017) and Persing and Ng (2016). Summarizing, we distinguish our framings in terms of modularity and in terms of their constraints. Modularity: Our dependency parsing framing and LSTM-ER are more modular than STagT because they de-couple relation information from entity information. However, (part of) 3By ‘de-coupling’, we mean that both tasks are treated separately rather than merging entity and relation information in the same tag label (output space). Still, a joint model like that of Miwa and Bansal (2016) de-couples the two tasks in such a way that many model parameters are shared across the tasks, similarly as in MTL. 15 this modularity can be regained by using STagT in an MTL setting. 
Moreover, since entity and relation information are considerably different, such a de-coupling may be advantageous. Constraints: LSTM-ER can, in principle, model any kind of— even many-to-many—relationships between detected entities. Thus, it is not guaranteed to produce trees, as we observe in AM datasets. STagT also does not need to produce trees, but it more severely restricts search space than does LSTMER: each token/component can only relate to one (and not several) other tokens/components. The same constraint is enforced by the dependency parsing framing. All of the tagging modelings, including LSTM-ER, are local models whereas our parsing framing is a global model: it globally enforces a tree structure on the token-level. Further remarks: (1) part of the TreeLSTM modeling inherent to LSTM-ER is ineffective for our data because this modeling exploits dependency tree structures on the sentence level, while relationships between components are almost never on the sentence level. In our data, roughly 92% of all relationships are between components that appear in different sentences. Secondly, (2) that a model enforces a constraint does not necessarily mean that it is more suitable for a respective task. It has frequently been observed that models tend to produce output consistent with constraints in their training data in such situations (Zhang et al., 2017; H´ector and Plank, 2017); thus, they have learned the constraints. 5 Experiments This section presents and discusses the empirical results for the AM framings outlined in §4. We relegate issues of pre-trained word embeddings, hyperparameter optimization and further practical issues to the supplementary material. Links to software used as well as some additional error analysis can also be found there. Evaluation Metric We adopt the evaluation metric suggested in Persing and Ng (2016). This computes true positives TP, false positives FP, and false negatives FN, and from these calculates component and relation F1 scores as F1 = 2TP 2TP+FP+FN. For space reasons, we refer to Persing and Ng (2016) for specifics, but to illustrate, for components, true positives are defined as the set of components in the gold standard for which there exists a predicted component with the same type that ‘matches’. Persing and Ng (2016) define a notion of what we may term ‘level α matching’: for example, at the 100% level (exact match) predicted and gold components must have exactly the same spans, whereas at the 50% level they must only share at least 50% of their tokens (approximate match). We refer to these scores as C-F1 (100%) and C-F1 (50%), respectively. For relations, an analogous F1 score is determined, which we denote by R-F1 (100%) and R-F1 (50%). We note that R-F1 scores depend on C-F1 scores because correct relations must have correct arguments. We also define a ‘global’ F1 score, which is the F1score of C-F1 and R-F1. Most of our results are shown in Table 2. (a) Dependency Parsing We show results for the two feature-based parsers MST (McDonald et al., 2005), Mate (Bohnet and Nivre, 2012) as well as the neural parsers by Dyer et al. (2015) (LSTM-Parser) and Kiperwasser and Goldberg (2016) (Kiperwasser). We train and test all parsers on the paragraph level, because training them on essay level was typically too memory-exhaustive. 
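Before turning to the individual results, the level-α matching underlying the reported scores can be illustrated with a small sketch (a simplified reading of Persing and Ng (2016) with our own helper names; here the overlap must cover at least α of both the gold and the predicted span, and the exact bookkeeping of their scorer is not reproduced):

def matches(pred, gold, alpha):
    # A component is a (token_index_set, type) pair.
    (p_span, p_type), (g_span, g_type) = pred, gold
    if p_type != g_type:
        return False
    overlap = len(p_span & g_span)
    return overlap >= alpha * len(p_span) and overlap >= alpha * len(g_span)

def component_f1(predicted, gold, alpha):
    # TP: gold components with a matching prediction; FP: unmatched predictions.
    tp = sum(any(matches(p, g, alpha) for p in predicted) for g in gold)
    fp = sum(not any(matches(p, g, alpha) for g in gold) for p in predicted)
    fn = len(gold) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy example: one premise predicted with a slightly too long span.
gold = [({5, 6, 7, 8}, "Premise")]
pred = [({4, 5, 6, 7, 8}, "Premise")]
print(component_f1(pred, gold, alpha=1.0))   # 0.0  (no exact match)
print(component_f1(pred, gold, alpha=0.5))   # 1.0  (approximate match)

With α = 1.0 the slightly-too-long prediction counts as a miss, while at α = 0.5 it is accepted, mirroring the difference between the 100% and 50% columns in the tables below.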
MST mostly labels only non-argumentative units correctly, except for recognizing individual major claims, but never finds their exact spans (e.g., “tourism can create negative impacts on” while the gold major claim is “international tourism can create negative impacts on the destination countries”). Mate is slightly better and in particular recognizes several major claims correctly. Kiperwasser performs decently on the approximate match level, but not on exact level. Upon inspection, we find that the parser often predicts ‘too large’ component spans, e.g., by including following punctuation. The best parser by far is the LSTM-Parser. It is over 100% better than Kiperwasser on exact spans and still several percentage points on approximate spans. How does performance change when we switch to the essay level? For the LSTM-Parser, the best performance on essay level is 32.84%/47.44% CF1 (100%/50% level), and 9.11%/14.45% on RF1, but performance result varied drastically between different parametrizations. Thus, the performance drop between paragraph and essay level is in any case immense. Since the employed features of modern featurebased parsers are rather general—such as distance between words or word identities—we had expected them to perform much better. The mini16 Paragraph level Essay level Acc. C-F1 R-F1 F1 Acc. C-F1 R-F1 F1 100% 50% 100% 50% 100% 50% 100% 50% 100% 50% 100% 50% MST-Parser 31.23 0 6.90 0 1.29 0 2.17 Mate 22.71 2.72 12.34 2.03 4.59 2.32 6.69 Kiperwasser 52.80 26.65 61.57 15.57 34.25 19.65 44.01 LSTM-Parser 55.68 58.86 68.20 35.63 40.87 44.38 51.11 STagBLCC 59.34 66.69 74.08 39.83 44.02 49.87 55.22 60.46 63.23 69.49 34.82 39.68 44.90 50.51 LSTM-ER 61.67 70.83 77.19 45.52 50.05 55.42 60.72 54.17 66.21 73.02 29.56 32.72 40.87 45.19 ILP 60.32 62.61 73.35 34.74 44.29 44.68 55.23 Table 2: Performance of dependency parsers, STagBLCC, LSTM-ER and ILP (from top to bottom). The ILP model operates on both levels. Best scores in each column in bold (signific. at p < 0.01; Two-sided Wilcoxon signed rank test, pairing F1 scores for documents). We also report token level accuracy. mal feature set employed by Kiperwasser is apparently not sufficient for accurate AM but still a lot more powerful than the hand-crafted feature approaches. We hypothesize that the LSTM-Parser’s good performance, relative to the other parsers, is due to its encoding of the whole stack history— rather than just the top elements on the stack as in Kiperwasser— which makes it aware of much larger ‘contexts’. While the drop in performance from paragraph to essay level is expected, the LSTM-Parser’s deterioration is much more severe than the other models’ surveyed below. We believe that this is due to a mixture of the following: (1) ‘capacity’, i.e., model complexity, of the parsers— that is, risk of overfitting; and (2) few, but very long sequences on essay level—that is, little training data (trees), paired with a huge search space on each train/test instance, namely, the number of possible trees on n tokens. See also our discussions below, particularly, our stability analysis. (b) Sequence Tagging For these experiments, we use the BLCC tagger from Ma and Hovy (2016) and refer to the resulting system as STagBLCC. Again, we observe that paragraph level is considerably easier than essay level; e.g., for relations, there is ∼5% points increase from essay to paragraph level. Overall, STagBLCC is ∼13% better than the best parser for C-F1 and ∼11% better for R-F1 on the paragraph level. 
Our explanation is that taggers are simpler local models, and thus need less training data and are less prone to overfitting. Moreover, they can much better deal with the long sequences because they are largely invariant to length: e.g., it does in principle not matter, from a parameter estimation perspective, whether we train our taggers on two sequences of lengths n and m, respectively, or on one long sequence of length n + m. (c) MTL As indicated, we use the MTL tagging framework from Søgaard and Goldberg (2016) for multi-task experiments. The underlying tagging framework is weaker than that of BLCC: there is no CNN which can take subword information into account and there are no dependencies between output labels: each tagging prediction is made independently of the other predictions. We refer to this system as STagBL. Accordingly, as Table 3 shows for the essay level (paragraph level omitted for space reasons), results are generally weaker: For exact match, C-F1 values are about ∼10% points below those of STagBLCC, while approximate match performances are much closer. Hence, the independence assumptions of the BL tagger apparently lead to more ‘local’ errors such as exact argument span identification (cf. error analysis). An analogous trend holds for argument relations. Additional Tasks: We find that when we train STagBL with only its main task—with label set Y as in Eq. (1)—the overall result is worst. In contrast, when we include the ‘natural subtasks’ “C” (label set YC consists of the projection on the coordinates (b, t) in Y) and/or “R” (label set YR consists of the projection on the coordinates (d, s)), performance increases typically by a few percentage points. This indicates that complex sequence tagging may benefit when we train a “sublabeler” in the same neural architecture, a finding that may be particularly relevant for morphological POS tagging (M¨uller et al., 2013). Unlike Søgaard and Goldberg (2016), we do not find that the optimal architecture is the one in which “lower” tasks (such as C or R) feed from lower layers. In fact, in one of the best parametrizations 17 the C task and the full task feed from the same layer in the deep BiLSTM. Moreover, we find that the C task is consistently more helpful as an auxiliary task than the R task. C-F1 R-F1 F1 100% 50% 100% 50% 100% 50% Y-3 49.59 65.37 26.28 37.00 34.35 47.25 Y-3:YC-1 54.71 66.84 28.44 37.35 37.40 47.92 Y-3:YR-1 51.32 66.49 26.92 37.18 35.31 47.69 Y-3:YC-3 54.58 67.66 30.22 40.30 38.90 50.51 Y-3:YR-3 53.31 66.71 26.65 35.86 35.53 46.64 Y-3:YC-1:YR-2 52.95 67.84 27.90 39.71 36.54 50.09 Y-3:YC-3:YR-3 54.55 67.60 28.30 38.26 37.26 48.86 Table 3: Performance of MTL sequence tagging approaches, essay level. Tasks separated by “:”. Layers from which tasks feed are indicated by respective numbers. On essay level, (d) LSTM-ER performs very well on component identification (+5% C-F1 compared to STagBLCC), but rather poor on relation identification (-18% R-F1). Hence, its overall F1 on essay level is considerably below that of STagBLCC. In contrast, LSTM-ER trained and tested on paragraph level substantially outperforms all other systems discussed, both for component as well as for relation identification. We think that its generally excellent performance on components is due to LSTM-ER’s de-coupling of component and relation tasks. Our findings indicate that a similar result can be achieved for STagT via MTL when components and relations are included as auxiliary tasks, cf. Table 3. 
For example, the improvement of LSTM-ER over STagBLCC, for C-F1, roughly matches the increase for STagBL when including components and relations separately (Y-3:YC-3:YR-3) over not including them as auxiliary tasks (Y-3). Lastly, the better performance of LSTM-ER over STagBLCC for relations on paragraph level appears to be a consequence of its better performance on components. E.g., when both arguments are correctly predicted, STagBLCC has even higher chance of getting their relation correct than LSTM-ER (95.34% vs. 94.17%). Why does LSTM-ER degrade so much on essay level for R-F1? As said, text sequences are much longer on essay level than on paragraph level— hence, there are on average many more entities on essay level. Thus, there are also many more possible relations between all entities discovered in a text—namely, there are O(2m2) possible relations between m discovered components. Due to its 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 2 4 6 8 10 prob. correct |d| LSTM-ER STagBLCC Figure 4: Probability of correct relation identification given true distance is |d|. generality, LSTM-ER considers all these relations as plausible, while STagT does not (for any of choice of T): e.g., our coding explicitly constrains each premise to link to exactly one other component, rather than to 0, . . . , m possible components, as LSTM-ER allows. In addition, our explicit coding of distance values d biases the learner T to reflect the distribution of distance values found in real essays—namely, that related components are typically close in terms of the number of components between them. In contrast, LSTM-ER only mildly prefers short-range dependencies over long-range dependencies, cf. Figure 4. The (e) ILP has access to both paragraph and essay level information and thus has always more information than all neural systems compared to. Thus, it also knows in which paragraph in an essay it is. This is useful particularly for major claims, which always occur in first or last paragraphs in our data. Still, its performance is equal to or lower than that of LSTM-ER and STagBLCC when both are evaluated on paragraph level. Stability Analysis Table 4 shows averages and standard deviations of two selected models, namely, the STagBLCC tagging framework as well as the LSTM-Parser over several different runs (different random initializations as well as different hyperparameters as discussed in the supplementary material). These results detail that the taggers have lower standard deviations than the parsers. The difference is particularly striking on the essay level where the parsers often completely fail to learn, that is, their performance scores are close to 0%. As discussed above, we attribute this to the parsers’ increased model capacity relative to the taggers, which makes them more prone to overfitting. Data scarcity is another very likely source of error in this context, as the parsers only observe 322 (though very rich) trees 18 in the training data, while the taggers are always roughly trained on 120K tokens. On paragraph level, they do observe more trees, namely, 1786. STagBLCC LSTM-Parser Essay 60.62±3.54 9.40±13.57 Paragraph 64.74±1.97 56.24±2.87 Table 4: C-F1 (100%) in % for the two indicated systems; essay vs. paragraph level. Note that the mean performances are lower than the majority performances over the runs given in Table 2. Error analysis A systematic source of errors for all systems is detecting exact argument spans (segmentation). 
For instance, the ILP system predicts the following premise: “As a practical epitome , students should be prepared to present in society after their graduation”, while the gold premise omits the preceding discourse marker, and hence reads: “students should be prepared to present in society after their graduation”. On the one hand, it has been observed that even humans have problems exactly identifying such entity boundaries (Persing and Ng, 2016; Yang and Cardie, 2013). On the other hand, our results in Table 2 indicate that the neural taggers BLCC and BLC (in the LSTMER model) are much better at such exact identification than either the ILP model or the neural parsers. While the parsers’ problems are most likely due to model complexity, we hypothesize that the ILP model’s increased error rates stem from a weaker underlying tagging model (featurebased CRF vs. BiLSTM) and/or its features.4 In fact, as Table 5 shows, the macro-F1 scores5 on only the component segmentation tasks (BIO labeling) are substantially higher for both LSTMER and STagBLCC than for the ILP model. Noteworthy, the two neural systems even outperform the human upper bound (HUB) in this context, reported as 88.6% in Stab and Gurevych (2017). 6 Conclusion We present the first study on neural end-to-end AM. We experimented with different framings, 4The BIO tagging task is independent and thus not affected by the ILP constraints in the model of Stab and Gurevych (2017). The same holds true for the model of Persing and Ng (2016). 5Denoted FscoreM in Sokolova and Lapalme (2009). STagBLCC LSTM-ER ILP HUB Essay 90.04 90.57 Paragraph 88.32 90.84 86.67 88.60 Table 5: F1 scores in % on BIO tagging task. such as encoding AM as a dependency parsing problem, as a sequence tagging problem with particular label set, as a multi-task sequence tagging problem, and as a problem with both sequential and tree structure information. We show that (1) neural computational AM is as good or (substantially) better than a competing feature-based ILP formulation, while eliminating the need for manual feature engineering and costly ILP constraint designing. (2) BiLSTM taggers perform very well for component identification, as demonstrated for our STagT frameworks, for T = BLCC and T = BL, as well as for LSTM-ER (BLC tagger). (3) (Naively) coupling component and relation identification is not optimal, but both tasks should be treated separately, but modeled jointly. (4) Relation identification is more difficult: when there are few entities in a text (“short documents”), a more general framework such as that provided in LSTM-ER performs reasonably well. When there are many entities (“long documents”), a more restrained modeling is preferable. These are also our policy recommendations. Our work yields new state-of-the-art results in end-to-end AM on the PE dataset from Stab and Gurevych (2017). Another possible framing, not considered here, is to frame AM as an encoder-decoder problem (Bahdanau et al., 2015; Vinyals et al., 2015). This is an even more general modeling than LSTM-ER. Its suitability for the end-to-end learning task is scope for future work, but its adequacy for component classification and relation identification has been investigated in Potash et al. (2016). Acknowledgments We thank Lucie Flekova, Judith Eckle-Kohler, Nils Reimers, and Christian Stab for valuable feedback and discussions. We also thank the anonymous reviewers for their suggestions. 
The second author was supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1416B (CEDIFOR). 19 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 1171–1179. Or Biran and Owen Rambow. 2011. Identifying justifications in written dialogs. In Fifth IEEE International Conference on Semantic Computing (ICSC). pages 162–168. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP-CoNLL ’12, pages 1455–1465. Rich Caruana. 1997. Multitask learning. Mach. Learn. 28(1):41–75. https://doi.org/10.1023/A:1007379606734. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Empirical Methods in Natural Language Processing (EMNLP). Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning. ACM, New York, NY, USA, ICML ’08, pages 160–167. https://doi.org/10.1145/1390156.1390177. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 334–343. Eirini Florou, Stasinos Konstantopoulos, Antonis Koukourikos, and Pythagoras Karampiperis. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities. Association for Computational Linguistics, Sofia, Bulgaria, pages 49–54. Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 2127–2137. Ivan Habernal and Iryna Gurevych. 2016. Argumentation Mining in User-Generated Web Discourse. Computational Linguistics 43(1). In press. Preprint: http://arxiv.org/abs/1601.02403. Martnez Alonso H´ector and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In Proceedings of EACL 2017 (long paper). Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR abs/1508.01991. http://arxiv.org/abs/1508.01991. Eliyahu Kiperwasser and Yoav Goldberg. 2016. 
Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics 4:313–327. Christian Kirschner, Judith Eckle-Kohler, and Iryna Gurevych. 2015. Linking the thoughts: Analysis of argumentation structures in scientific publications. In Proceedings of the 2nd Workshop on Argumentation Mining held in conjunction with the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT 2015). pages 1– 11. Kevin Knight, Daniel Marcu, and Jill Burstein. 2003. Finding the write stuff: Automatic identification of discourse structure in student essays. IEEE Intelligent Systems 18:32–39. Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. In COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland. pages 1489– 1500. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1064–1074. http://www.aclweb.org/anthology/P16-1101. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technology 20 and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT ’05, pages 523–530. https://doi.org/10.3115/1220575.1220641. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1105– 1116. http://www.aclweb.org/anthology/P16-1105. Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law. ACM, New York, NY, USA, ICAIL ’07, pages 225–230. https://doi.org/10.1145/1276318.1276362. Philippe Muller, Stergos D. Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text-level discourse parsing. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, 8-15 December 2012, Mumbai, India. pages 1883–1900. Thomas M¨uller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 322–332. http://www.aclweb.org/anthology/D13-1032. Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law. ACM, New York, NY, USA, ICAIL ’09, pages 98–107. https://doi.org/10.1145/1568234.1568246. Andreas Peldszus and Manfred Stede. 2015. Joint prediction in mst-style discourse parsing for argumentation mining. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 938–948. http://aclweb.org/anthology/D15-1110. Andreas Peldszus and Manfred Stede. 2016. An annotated corpus of argumentative microtexts. In Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation. Lisabon, pages 801–815. Nanyun Peng and Mark Dredze. 2016. Multitask multi-domain representation learning for sequence tagging. CoRR abs/1608.02689. http://arxiv.org/abs/1608.02689. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 543–552. Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 1384–1394. http://www.aclweb.org/anthology/N16-1164. Peter Potash, Alexey Romanov, and Anna Rumshisky. 2016. Here’s my point: Argumentation Mining with Pointer Networks. Arxiv preprint https://arxiv.org/abs/1612.08994 . Chris Reed, Raquel Mochales-Palau, Glenn Rowe, and Marie-Francine Moens. 2008. Language resources for studying argument. In Proceedings of the Sixth International Conference on Language Resources and Evaluation. Marrakech, Morocco, LREC ’08, pages 2613–2618. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 440–450. N. Rooney, H. Wang, and F. Browne. 2012. Applying kernel methods to argumentation mining. In TwentyFifth International FLAIRS Conference. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671 . Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 231–235. http://anthology.aclweb.org/P16-2038. Marina Sokolova and Guy Lapalme. 2009. A systematic analysis of performance measures for classification tasks. Information Processing & Management 45(4):427–437. https://doi.org/10.1016/j.ipm.2009.03.002. Swapna Somasundaran, Brian Riordan, Binod Gyawali, and Su-Youn Yoon. 2016. Evaluating argumentative and narrative essays using graphs. 21 In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. pages 1568–1578. Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics (in press), preprint: http://arxiv.org/abs/1604.07370). Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. 
Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 2692–2700. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 1640–1649. http://www.aclweb.org/anthology/P13-1161. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. CoRR abs/1603.06270. Fan Zhang and Diane J. Litman. 2016. Using context to predict the purpose of argumentative writing revisions. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1424–1430. Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of EACL 2017 (long papers). Association for Computational Linguistics. 22
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 209–220 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1020

Coarse-to-Fine Question Answering for Long Documents

Eunsol Choi† University of Washington [email protected] Daniel Hewlett, Jakob Uszkoreit Google {dhewlett,usz}@google.com Illia Polosukhin† XIX.ai [email protected] Alexandre Lacoste† Element AI [email protected] Jonathan Berant† Tel Aviv University [email protected]

Abstract

We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a new dataset, while speeding up the model by 3.5x-6.7x.

1 Introduction

Reading a document and answering questions about its content are among the hallmarks of natural language understanding. Recently, interest in question answering (QA) from unstructured documents has increased along with the availability of large scale datasets for reading comprehension (Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Onishi et al., 2016; Nguyen et al., 2016; Trischler et al., 2016a).

†Work done while the authors were at Google.

Figure 1: Hierarchical question answering: the model first selects relevant sentences that produce a document summary (ˆd) for the given query (x), and then generates an answer (y) based on the summary (ˆd) and the query x.

Current state-of-the-art approaches for QA over documents are based on recurrent neural networks (RNNs) that encode the document and the question to determine the answer (Hermann et al., 2015; Chen et al., 2016; Kumar et al., 2016; Kadlec et al., 2016; Xiong et al., 2016). While such models have access to all the relevant information, they are slow because the model needs to be run sequentially over possibly thousands of tokens, and the computation is not parallelizable. In fact, such models usually truncate the documents and consider only a limited number of tokens (Miller et al., 2016; Hewlett et al., 2016). Inspired by studies on how people answer questions by first skimming the document, identifying relevant parts, and carefully reading these parts to produce an answer (Masson, 1983), we propose a coarse-to-fine model for question answering.
Our model takes a hierarchical approach (see Figure 1), where first a fast model is used to select a few sentences from the document that are relevant for answering the question (Yu et al., 2014; Yang et al., 2016a). Then, a slow RNN is employed to produce the final answer from the selected sentences. The RNN is run over a fixed number of tokens, regardless of the length of the document. Empirically, our model encodes the 209 d: s1: The 2011 Joplin tornado was a catastrophic EF5rated multiple-vortex tornado that struck Joplin, Missouri . . . s4: It was the third tornado to strike Joplin since May 1971. s5: Overall, the tornado killed 158 people . . ., injured some 1,150 others, and caused damages . . . x: how many people died in joplin mo tornado y: 158 people Figure 2: A training example containing a document d, a question x and an answer y in the WIKISUGGEST dataset. In this example, the sentence s5 is necessary to answer the question. text up to 6.7 times faster than the base model, which reads the first few paragraphs, while having access to four times more tokens. A defining characteristic of our setup is that an answer does not necessarily appear verbatim in the input (the genre of a movie can be determined even if not mentioned explicitly). Furthermore, the answer often appears multiple times in the document in spurious contexts (the year ‘2012’ can appear many times while only once in relation to the question). Thus, we treat sentence selection as a latent variable that is trained jointly with the answer generation model from the answer only using reinforcement learning. Treating sentence selection as a latent variable has been explored in classification (Yessenalina et al., 2010; Lei et al., 2016), however, to our knowledge, has not been applied for question answering. We find that jointly training sentence selection and answer generation is especially helpful when locating the sentence containing the answer is hard. We evaluate our model on the WIKIREADING dataset (Hewlett et al., 2016), focusing on examples where the document is long and sentence selection is challenging, and on a new dataset called WIKISUGGEST that contains more natural questions gathered from a search engine. To conclude, we present a modular framework and learning procedure for QA over long text. It captures a limited form of document structure such as sentence boundaries and deals with long documents or potentially multiple documents. Experiments show improved performance compared to the state of the art on the subset of WIKIREADING, comparable performance on other datasets, and a 3.5x-6.7x speed up in document encoding, while allowing access to much longer documents. % answer avg # of % match string exists ans. match first sent WIKIREADING 47.1 1.22 75.1 WR-LONG 50.4 2.18 31.3 WIKISUGGEST 100 13.95 33.6 Table 1: Statistics on string matches of the answer y∗in the document. The third column only considers examples with answer match. Often the answer string is missing or appears many times while it is relevant to query only once. 2 Problem Setting Given a training set of question-document-answer triples {x(i), d(i), y(i)}N i=1, our goal is to learn a model that produces an answer y for a questiondocument pair (x, d). A document d is a list of sentences s1, s2, . . . , s|d|, and we assume that the answer can be produced from a small latent subset of the sentences. Figure 2 illustrates a training example in which sentence s5 is in this subset. 
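To make the problem setting concrete, the following is a minimal sketch of one training triple in the format described above, using the Joplin example of Figure 2; the container and field names are illustrative and not taken from the authors' code.

# One (question, document, answer) training triple from Section 2 / Figure 2.
# The document d is a list of sentences; the answer is assumed to be producible
# from a small latent subset of them (here, sentence s_5).
from dataclasses import dataclass
from typing import List

@dataclass
class QAExample:
    question: str          # x
    sentences: List[str]   # d = (s_1, ..., s_|d|)
    answer: str            # y

example = QAExample(
    question="how many people died in joplin mo tornado",
    sentences=[
        "The 2011 Joplin tornado was a catastrophic EF5-rated multiple-vortex "
        "tornado that struck Joplin, Missouri ...",
        # ... s_2 to s_4 omitted ...
        "Overall, the tornado killed 158 people ..., injured some 1,150 others, "
        "and caused damages ...",
    ],
    answer="158 people",
)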
3 Data We evaluate on WIKIREADING, WIKIREADING LONG, and a new dataset, WIKISUGGEST. WIKIREADING (Hewlett et al., 2016) is a QA dataset automatically generated from Wikipedia and Wikidata: given a Wikipedia page about an entity and a Wikidata property, such as PROFESSION, or GENDER, the goal is to infer the target value based on the document. Unlike other recently released large-scale datasets (Rajpurkar et al., 2016; Trischler et al., 2016a), WIKIREADING does not annotate answer spans, making sentence selection more challenging. Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. Thus, the data is not ideal for testing a sentence selection model compared to a model that uses the first few sentences. Table 1 quantifies this intuition: We consider sentences containing the answer y∗as a proxy for sentences that should be selected, and report how often y∗appears in the document. Additionally, we report how frequently this proxy oracle sentence is the first sentence. We observe that in WIKIREADING, the answer appears verbatim in 47.1% of the examples, and in 75% of them the match is in the first sentence. Thus, the importance of modeling sentence selection is limited. To remedy that, we filter WIKIREADING and ensure a more even distribution of answers throughout the document. We prune short docu210 # of uniq. # of # of words # of tokens queries examples / query / doc. WIKIREADING 867 18.58M 2.35 489.2 WR-LONG 239 1.97M 2.14 1200.7 WIKISUGGEST 3.47M 3.47M 5.03 5962.2 Table 2: Data statistics. ments with less than 10 sentences, and only consider Wikidata properties for which Hewlett et al. (2016)’s best model obtains an accuracy of less than 60%. This prunes out properties such as GENDER, GIVEN NAME, and INSTANCE OF.1 The resulting WIKIREADING LONG dataset contains 1.97M examples, where the answer appears in 50.4% of the examples, and appears in the first sentence only 31% of the time. On average, the documents in WIKIREADING LONG contain 1.2k tokens, more tokens than those of SQuAD (average 122 tokens) or CNN (average 763 tokens) datasets (see Table 2). Table 1 shows that the exact answer string is often missing from the document in WIKIREADING. This is since Wikidata statements include properties such as NATIONALITY, which are not explicitly mentioned, but can still be inferred. A drawback of this dataset is that the queries, Wikidata properties, are not natural language questions and are limited to 858 properties. To model more realistic language queries, we collect the WIKISUGGEST dataset as follows. We use the Google Suggest API to harvest natural language questions and submit them to Google Search. Whenever Google Search returns a box with a short answer from Wikipedia (Figure 3), we create an example from the question, answer, and the Wikipedia document. If the answer string is missing from the document this often implies a spurious question-answer pair, such as (‘what time is half time in rugby’, ‘80 minutes, 40 minutes’). Thus, we pruned question-answer pairs without the exact answer string. We examined fifty examples after filtering and found that 54% were well-formed question-answer pairs where we can ground answers in the document, 20% contained answers without textual evidence in the document (the answer string exists in an irreleveant context), and 26% contain incorrect QA pairs such as the last two examples in Figure 3. The data collection was performed in May 2016. 
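As a rough illustration of how the proxy-oracle statistics of Table 1 can be computed over a corpus of such (question, document, answer) triples, the sketch below counts verbatim answer matches per document; it assumes the example structure sketched earlier, the averages are taken over matching examples only, and the function name is hypothetical.

# Sketch of the Table 1 statistics: how often the answer string y* appears
# verbatim in the document, how many sentences match on average (over matching
# examples), and how often the first match is the first sentence.
def answer_match_stats(examples):
    n_match, total_hits, first_sent_hits = 0, 0, 0
    for ex in examples:
        hits = [i for i, s in enumerate(ex.sentences) if ex.answer in s]
        if hits:
            n_match += 1
            total_hits += len(hits)
            first_sent_hits += int(hits[0] == 0)
    n = len(examples)
    return {
        "% answer string exists": 100.0 * n_match / n,
        "avg # of ans. match": total_hits / max(n_match, 1),
        "% match first sent": 100.0 * first_sent_hits / max(n_match, 1),
    }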
1These three relations alone account for 33% of the data. WIKISUGGEST Query Answer what year did virgina became a state 1788 general manager of smackdown Theodore Long minnesota viking colors purple coco martin latest movies maybe this time longest railway station in asia Gorakhpur son from modern family Claire Dunphy north dakota main religion Christian lands end’ brand Lands’ End wdsu radio station WCBE Figure 3: Example queries and answers of WIKISUGGEST. 4 Model Our model has two parts (Figure 1): a fast sentence selection model (Section 4.1) that defines a distribution p(s | x, d) over sentences given the input question (x) and the document (d), and a more costly answer generation model (Section 4.3) that generates an answer y given the question and a document summary, ˆd (Section 4.2), that focuses on the relevant parts of the document. 4.1 Sentence Selection Model Following recent work on sentence selection (Yu et al., 2014; Yang et al., 2016b), we build a feed-forward network to define a distribution over the sentences s1, s2, . . . , s|d|. We consider three simple sentence representations: a bag-of-words (BoW) model, a chunking model, and a (parallelizable) convolutional model. These models are efficient at dealing with long documents, but do not fully capture the sequential nature of text. BoW Model Given a sentence s, we denote by BoW(s) the bag-of-words representation that averages the embeddings of the tokens in s. To define a distribution over the document sentences, we employ a standard attention model (e.g., (Hermann et al., 2015)), where the BoW representation of the query is concatenated to the BoW representation of each sentence sl, and then passed through a single layer feed-forward network: hl = [BoW(x); BoW(sl)] vl = v⊤ReLU(Whl), p(s = sl | x, d) = softmax(vl), 211 where [; ] indicates row-wise concatenation, and the matrix W, the vector v, and the word embeddings are learned parameters. Chunked BoW Model To get more fine-grained granularity, we split sentences into fixed-size smaller chunks (seven tokens per chunk) and score each chunk separately (Miller et al., 2016). This is beneficial if questions are answered with subsentential units, by allowing to learn attention over different chunks. We split a sentence sl into a fixed number of chunks (cl,1, cl,2 . . . , cl,J), generate a BoW representation for each chunk, and score it exactly as in the BoW model. We obtain a distribution over chunks, and compute sentence probabilities by marginalizing over chunks from the same sentence. Let p(c = cl,j | x, d) be the distribution over chunks from all sentences, then: p(s = sl | x, d) = J X j=1 p(c = cl,j | x, d), with the same parameters as in the BoW model. Convolutional Neural Network Model While our sentence selection model is designed to be fast, we explore a convolutional neural network (CNN) that can compose the meaning of nearby words. A CNN is still efficient, since all filters can be computed in parallel. Following previous work (Kim, 2014; Kalchbrenner et al., 2014), we concatenate the embeddings of tokens in the query x and the sentence sl, and run a convolutional layer with F filters and width w over the concatenated embeddings. This results in F features for every span of length w, and we employ max-over-time-pooling (Collobert et al., 2011) to get a final representation hl ∈RF . We then compute p(s = sl | x, d) by passing hl through a single layer feed-forward network as in the BoW model. 
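A minimal numpy sketch of the BoW sentence-selection scorer defined above, i.e. hl = [BoW(x); BoW(sl)], vl = v⊤ReLU(W hl), followed by a softmax over the sentences of the document; the dimensions and the random initialization below are placeholders standing in for the learned parameters.

import numpy as np

d = 256                                       # embedding dimension (as in the paper's setup)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(d, 2 * d))    # feed-forward weights (learned in practice)
v = rng.normal(scale=0.1, size=d)             # scoring vector (learned in practice)

def bow(token_embeddings):
    """BoW(s): average of the word embeddings of the tokens in s."""
    return np.mean(token_embeddings, axis=0)

def sentence_distribution(query_emb, sentence_embs):
    """p(s = s_l | x, d) for every sentence, following the BoW model of Sec. 4.1.

    query_emb:     (n_query_tokens, d) embeddings of the question tokens.
    sentence_embs: list of (n_tokens_l, d) arrays, one per sentence.
    """
    q = bow(query_emb)
    scores = []
    for s in sentence_embs:
        h_l = np.concatenate([q, bow(s)])             # [BoW(x); BoW(s_l)]
        scores.append(v @ np.maximum(W @ h_l, 0.0))   # v^T ReLU(W h_l)
    scores = np.array(scores)
    scores -= scores.max()                            # numerical stability
    p = np.exp(scores)
    return p / p.sum()                                # softmax over sentences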
4.2 Document Summary After computing attention over sentences, we create a summary that focuses on the document parts related to the question using deterministic soft attention or stochastic hard attention. Hard attention is more flexible, as it can focus on multiple sentences, while soft attention is easier to optimize and retains information from multiple sentences. Hard Attention We sample a sentence ˆs ∼ p(s | x, d) and fix the document summary ˆd = ˆs to be that sentence during training. At test time, we choose the most probable sentence. To extend the document summary to contain more information, we can sample without replacement K sentences from the document and define the summary to be the concatenation of the sampled sentences ˆd = [ˆs1; ˆs2; . . . ; ˆsK]. Soft Attention In the soft attention model (Bahdanau et al., 2015) we compute a weighted average of the tokens in the sentences according to p(s | x, d). More explicitly, let ˆdm be the mth token of the document summary. Then, by fixing the length of every sentence to M tokens,2 the blended tokens are computed as follows: ˆdm = |d| X l=1 p(s = sl | x, d) · sl,m, where sl,m is the mth word in the lth sentence (m ∈{1, . . . , M}). As the answer generation models (Section 4.3) take a sequence of vectors as input, we average the tokens at the word level. This gives the hard attention an advantage since it samples a “real” sentence without mixing words from different sentences. Conversely, soft attention is trained more easily, and has the capacity to learn a low-entropy distribution that is similar to hard attention. 4.3 Answer Generation Model State-of-the-art question answering models use RNN models to encode the document and question and selects the answer. We focus on a hierarchical model with fast sentence selection, and do not subscribe to a particular answer generation architecture. Here we implemented the state-of-the-art wordlevel sequence-to-sequence model with placeholders, described by Hewlett et al. (2016). This models can produce answers that does not appear in the sentence verbatim. This model takes the query tokens, and the document (or document summary) tokens as input and encodes them with a Gated Recurrent Unit (GRU; Cho et al. (2014)). Then, the answer is decoded with another GRU model, defining a distribution over answers p(y | x, ˆd). In this work, we modified the original RNN: the word embeddings for the RNN decoder input, output and original word embeddings are shared. 2Long sentences are truncated and short ones are padded. 212 5 Learning We consider three approaches for learning the model parameters (denoted by θ): (1) We present a pipeline model, where we use distant supervision to train a sentence selection model independently from an answer generation model. (2) The hard attention model is optimized with REINFORCE (Williams, 1992) algorithm. (3) The soft attention model is fully differentiable and is optimized end-to-end with backpropagation. Distant Supervision While we do not have an explicit supervision for sentence selection, we can define a simple heuristic for labeling sentences. We define the gold sentence to be the first sentence that has a full match of the answer string, or the first sentence in the document if no full match exists. By labeling gold sentences, we can train sentence selection and answer generation independently with standard supervised learning, maximizing the log-likelihood of the gold sentence and answer, given the document and query. 
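A short sketch of the gold-sentence heuristic just described, assuming sentences and answers are plain strings; the function name is illustrative.

def distant_gold_sentence(sentences, answer):
    """Distant-supervision label: index of the first sentence containing a full
    match of the answer string, or 0 (the first sentence) if no match exists."""
    for idx, sent in enumerate(sentences):
        if answer in sent:
            return idx
    return 0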
Let y∗and s∗be the target answer and sentence , where s∗ also serves as the document summary. The objective is to maximize: J(θ) = log pθ(y∗, s∗| x, d) = log pθ(s∗| x, d) + log pθ(y∗| s∗, x). Since at test time we do not have access to the target sentence s∗needed for answer generation, we replace it by the model prediction arg maxsl∈d pθ(s = sl | d, x). Reinforcement Learning Because the target sentence is missing, we use reinforcement learning where our action is sentence selection, and our goal is to select sentences that lead to a high reward. We define the reward for selecting a sentence as the log probability of the correct answer given that sentence, that is, Rθ(sl) = log pθ(y = y∗| sl, x). Then the learning objective is to maximize the expected reward: J(θ) = X sl∈d pθ(s=sl | x, d) · Rθ(sl) = X sl∈d pθ(s=sl | x, d) · log pθ(y=y∗| sl, x). Following REINFORCE (Williams, 1992), we approximate the gradient of the objective with a sample, ˆs ∼pθ(s | x, d): ∇J(θ) ≈∇log pθ(y | ˆs, x) + log pθ(y | ˆs, x) · ∇log pθ(ˆs | x, d). Sampling K sentences is similar and omitted for brevity. Training with REINFORCE is known to be unstable due to the high variance induced by sampling. To reduce variance, we use curriculum learning, start training with distant supervision and gently transition to reinforcement learning, similar to DAGGER (Ross et al., 2011). Given an example, we define the probability of using the distant supervision objective at each step as re, where r is the decay rate and e is the index of the current training epoch.3 Soft Attention We train the soft attention model by maximizing the log likelihood of the correct answer y∗given the input question and document log pθ(y∗| d, x). Recall that the answer generation model takes as input the query x and document summary ˆd, and since ˆd is an average of sentences weighted by sentence selection, the objective is differentiable and is trained end-to-end. 6 Experiments Experimental Setup We used 70% of the data for training, 10% for development, and 20% for testing in all datasets. We used the first 35 sentences in each document as input to the hierarchical models, where each sentence has a maximum length of 35 tokens. Similar to Miller et al. (2016), we add the first five words in the document (typically the title) at the end of each sentence sequence for WIKISUGGEST. We add the sentence index as a one hot vector to the sentence representation. We coarsely tuned and fixed most hyperparameters for all models. The word embedding dimension is set to 256 for both sentence selection and answer generation models. We used the decay rate of 0.8 for curriculum learning. Hidden dimension is fixed at 128, batch size at 128, GRU state cell at 512, and vocabulary size at 100K. For CNN sentence selection model, we used 100 filters and set filter width as five. The initial learning rate and gradient clipping coefficients for each model are tuned on the development set. The ranges for learning rates were 0.00025, 0.0005, 0.001, 0.002, 0.004 and 0.5, 1.0 for gradient clipping coefficient. 3 We tuned r ∈[0.3, 1] on the development set. 213 Figure 4: Runtime for document encoding on an Intel Xeon CPU E5-1650 @3.20GHz on WIKIREADING at test time. The boxplot represents the throughput of BASE and each line plot shows the proposed models’ speed gain over BASE. Exact numbers are reported in the supplementary material. We halved the learning rate every 25k steps. We use the Adam (Kingma and Ba, 2015) optimizer and TensorFlow framework (Abadi et al., 2015). 
Evaluation Metrics Our main evaluation metric is answer accuracy, the proportion of questions answered correctly. For sentence selection, since we do not know which sentence contains the answer, we report approximate accuracy by matching sentences that contain the answer string (y∗). For the soft attention model, we treat the sentence with the highest probability as the predicted sentence. Models and Baselines The models PIPELINE, REINFORCE, and SOFTATTEND correspond to the learning objectives in Section 5. We compare these models against the following baselines: FIRST always selects the first sentence of the document. The answer appears in the first sentence in 33% and 15% of documents in WIKISUGGEST and WIKIREADING LONG. BASE is the re-implementation of the best model by Hewlett et al. (2016), consuming the first 300 tokens. We experimented with providing additional tokens to match the length of document available to hierarchical models, but this performed poorly.4 ORACLE selects the first sentence with the answer string if it exists, or otherwise the first sentence in the document. 4Our numbers on WIKIREADING outperform previously reported numbers due to modifications in implementation and better optimization. Dataset Learning Accuracy FIRST 26.7 BASE 40.1 ORACLE 43.9 WIKIREADING PIPELINE 36.8 LONG SOFTATTEND 38.3 REINFORCE (K=1) 40.1 REINFORCE (K=2) 42.2 FIRST 44.0 BASE 46.7 ORACLE 60.0 WIKI PIPELINE 45.3 SUGGEST SOFTATTEND 45.4 REINFORCE (K=1) 45.4 REINFORCE (K=2) 45.8 FIRST 71.0 HEWLETT ET AL. (2016) 71.8 BASE 75.6 ORACLE 74.6 WIKIREADING SOFTATTEND 71.6 PIPELINE 72.4 REINFORCE (K=1) 73.0 REINFORCE (K=2) 73.9 Table 3: Answer prediction accuracy on the test set. K is the number of sentences in the document summary. Answer Accuracy Results Table 3 summarizes answer accuracy on all datasets. We use BOW encoder for sentence selection as it is the fastest. The proposed hierarchical models match or exceed the performance of BASE, while reducing the number of RNN steps significantly, from 300 to 35 (or 70 for K=2), and allowing access to later parts of the document. Figure 4 reports the speed gain of our system. While throughput at training time can be improved by increasing the batch size, at test time real-life QA systems use batch size 1, where REINFORCE obtains a 3.5x-6.7x speedup (for K=2 or K=1). In all settings, REINFORCE was at least three times faster than the BASE model. All models outperform the FIRST baseline, and utilizing the proxy oracle sentence (ORACLE) improves performance on WIKISUGGEST and WIKIREADNG LONG. In WIKIREADING, where the proxy oracle sentence is often missing and documents are short, BASE outperforms ORACLE. Jointly learning answer generation and sentence selection, REINFORCE outperforms PIPELINE, which relies on a noisy supervision signal for sentence selection. The improvement is larger in WIKIREADING LONG, where the approximate supervision for sentence selection is missing for 51% of examples compared to 22% of examples in WIKISUGGEST.5 On WIKIREADING LONG, REINFORCE outper5The number is lower than in Table 1 because we cropped sentences and documents, as mentioned above. 214 Dataset Learning Model Accuracy CNN 70.7 PIPELINE BOW 69.2 CHUNKBOW 74.6 WIKI CNN 74.2 READING REINFORCE BOW 72.2 LONG CHUNKBOW 74.4 FIRST 31.3 SOFTATTEND (BoW) 70.1 CNN 62.3 PIPELINE BOW 67.5 CHUNKBOW 57.4 WIKI CNN 64.6 SUGGEST REINFORCE BOW 67.3 CHUNKBOW 59.3 FIRST 42.6 SOFTATTEND (BoW) 49.9 Table 4: Approximate sentence selection accuracy on the development set for all models. 
We use ORACLE to find a proxy gold sentence and report the proportion of times each model selects the proxy sentence. forms all other models (excluding ORACLE, which has access to gold labels at test time). In other datasets, BASE performs slightly better than the proposed models, at the cost of speed. In these datasets, the answers are concentrated in the first few sentences. BASE is advantageous in categorical questions (such as GENDER), gathering bits of evidence from the whole document, at the cost of speed. Encouragingly, our system almost reaches the performance of ORACLE in WIKIREADING, showing strong results in a limited token setting. Sampling an additional sentence into the document summary increased performance in all datasets, illustrating the flexibility of hard attention compared to soft attention. Additional sampling allows recovery from mistakes in WIKIREADING LONG, where sentence selection is challenging.6 Comparing hard attention to soft attention, we observe that REINFORCE performed better than SOFTATTEND. The attention distribution learned by the soft attention model was often less peaked, generating noisier summaries. Sentence Selection Results Table 4 reports sentence selection accuracy by showing the proportion of times models selects the proxy gold sentence when it is found by ORACLE. In WIKIREADING LONG, REINFORCE finds the approximate gold sentence in 74.4% of the examples where the the answer is in the document. In WIKISUGGEST performance is at 67.5%, mostly due to noise in the data. PIPELINE performs slightly better as it is directly trained towards our noisy eval6Sampling more help pipeline methods less. WR WIKI LONG SUGGEST No evidence in doc. 29 8 Error in answer generation 13 15 Noisy query & answer 0 24 Error in sentence selection 8 3 Table 5: Manual error analysis on 50 errors from the development set for REINFORCE (K=1). uation. However, not all sentences that contain the answer are useful to answer the question (first example in Table 6). REINFORCE learned to choose sentences that are likely to generate a correct answer rather than proxy gold sentences, improving the final answer accuracy. On WIKIREADING LONG, complex models (CNN and CHUNKBOW) outperform the simple BOW, while on WIKISUGGEST BOW performed best. Qualitative Analysis We categorized the primary reasons for the errors in Table 5 and present an example for each error type in Table 6. All examples are from REINFORCE with BOW sentence selection. The most frequent source of error for WIKIREADING LONG was lack of evidence in the document. While the dataset does not contain false answers, the document does not always provide supporting evidence (examples of properties without clues are ELEVATION ABOVE SEA LEVEL and SISTER). Interestingly, the answer string can still appear in the document as in the first example in Table 6: ‘Saint Petersburg’ appears in the document (4th sentence). Answer generation at times failed to generate the answer even when the correct sentence was selected. This was pronounced especially in long answers. For the automatically collected WIKISUGGEST dataset, noisy question-answer pairs were problematic, as discussed in Section 3. However, the models frequently guessed the spurious answer. We attribute higher proxy performance in sentence selection for WIKISUGGEST to noise. In manual analysis, sentence selection was harder in WIKIREADING LONG, explaining why sampling two sentences improved performance. 
In the first correct prediction (Table 6), the model generates the answer, even when it is not in the document. The second example shows when our model spots the relevant sentence without obvious clues. In the last example the model spots a sentence far from the head of the document. Figure 5 contains a visualization of the atten215 WIKIREADING LONG (WR LONG) Error Type No evidence in doc. (Query, Answer) (place of death, Saint Petersburg) System Output Crimean Peninsula 1 11.7 Alexandrovich Friedmann ( also spelled Friedman or [Fridman] , Russian : . . . 4 3.4 Friedmann was baptized . . . and lived much of his life in Saint Petersburg . 25 63.6 Friedmann died on September 16 , 1925 , at the age of 37 , from typhoid fever that he contracted while returning from a vacation in Crimean Peninsula . Error Type Error in sentence selection (Query, Answer) (position played on team speciality, power forward) System Output point guard 1 37.8 James Patrick Johnson (born February 20 , 1987) is an American professional basketball player for the Toronto Raptors of the National Basketball Association ( NBA ). 3 22.9 Johnson was the starting power forward for the Demon Deacons of Wake Forest University WIKISUGGEST (WS) Error Type Error in answer generation (Query, Answer) (david blaine’s mother, Patrice Maureen White) System Output Maureen 1 14.1 David Blaine (born David Blaine White; April 4, 1973) is an American magician, illusionist . . . 8 22.6 Blaine was born and raised in, Brooklyn , New York the son of Patrice Maureen White . . . Error Type Noisy query & answer (Query, Answer) (what are dried red grapes called, dry red wines) System Output Chardonnay 1 2.8 Burgundy wine ( French : Bourgogne or vin de Bourgogne ) is wine made in the . . . 2 90.8 The most famous wines produced here . . . are dry red wines made from Pinot noir grapes . . . Correctly Predicted Examples WR LONG (Query, Answer) (position held, member of the National Assembly of South Africa) 1 98.4 Anchen Margaretha Dreyer (born 27 March 1952) is a South African politician, a Member of Parliament for the opposition Democratic Alliance , and currently . . . (Query, Answer) (headquarters locations, Solihull) 1 13.8 LaSer UK is a provider of credit and loyalty programmes , operating in the UK and Republic . . . 4 82.3 The company ’s operations are in Solihull and Belfast where it employs 800 people . WS (Query, Answer) (avril lavigne husband, Chad Kroeger) 1 17.6 Avril Ramona Lavigne ([vrłl] [lvin] / ; French pronunciation : ¡200b¿ ( [avil] [lavi] ) ;. . . 23 68.4 Lavigne married Nickelback frontman , Chad Kroeger , in 2013 . Avril Ramona Lavigne was . . . Table 6: Example outputs from REINFORCE (K=1) with BOW sentence selection model. First column: sentence index (l). Second column: attention distribution pθ(sl|d, x). Last column: text sl. tion distribution over sentences, p(sl | d, x), for different learning procedures. The increased frequency of the answer string in WIKISUGGEST vs. WIKIREADING LONG is evident in the leftmost plot. SOFTATTEND and CHUNKBOW clearly distribute attention more evenly across the sentences compared to BOW and CNN. 7 Related Work There has been substantial interest in datasets for reading comprehension. 
MCTest (Richardson et al., 2013) is a smaller-scale datasets focusing on common sense reasoning; bAbi (Weston et al., 2015) is a synthetic dataset that captures various aspects of reasoning; and SQuAD (Rajpurkar et al., 2016; Wang et al., 2016; Xiong et al., 2016) and NewsQA (Trischler et al., 2016a) are QA datasets where the answer is a span in the document. Compared to Wikireading, some datasets covers shorter passages (average 122 words for SQuAD). Cloze-style question answering datasets (Hermann et al., 2015; Onishi et al., 2016; Hill et al., 2015) assess machine comprehension but do not form questions. The recently released MS MARCO dataset (Nguyen et al., 2016) consists of query logs, web documents and crowd-sourced answers. Answer sentence selection is studied with the TREC QA (Voorhees and Tice, 2000), WikiQA (Yang et al., 2016b) and SelQA (Jurczyk et al., 2016) datasets. Recently, neural networks models (Wang and Nyberg, 2015; Severyn and 216 Figure 5: For a random subset of documents in the development set, we visualized the learned attention over the sentences (p(sl|d, x)). Moschitti, 2015; dos Santos et al., 2016) achieved improvements on TREC datsaet. Sultan et al. (2016) optimized the answer sentence extraction and the answer extraction jointly, but with gold labels for both parts. Trischler et al. (2016b) proposed a model that shares the intuition of observing inputs at multiple granularities (sentence, word), but deals with multiple choice questions. Our model considers answer sentence selection as latent and generates answer strings instead of selecting text spans, and we found that WIKIREADING dataset suits our purposes best with some pruning, which still provided 1.97 million examples compared to 2K questions for TREC dataset. Hierarchical models which treats sentence selection as a latent variable have been applied text categorization (Yang et al., 2016b), extractive summarization (Cheng and Lapata, 2016), machine translation (Ba et al., 2014) and sentiment analysis (Yessenalina et al., 2010; Lei et al., 2016). To the best of our knowledge, we are the first to use the hierarchical nature of a document for QA. Finally, our work is related to the reinforcement learning literature. Hard and soft attention were examined in the context of caption generation (Xu et al., 2015). Curriculum learning was investigated in Sachan and Xing (2016), but they focused on the ordering of training examples while we combine supervision signals. Reinforcement learning recently gained popularity in tasks such as coreference resolution (Clark and Manning, 2016), information extraction (Narasimhan et al., 2016), semantic parsing (Andreas et al., 2016) and textual games (Narasimhan et al., 2015; He et al., 2016). 8 Conclusion We presented a coarse-to-fine framework for QA over long documents that quickly focuses on the relevant portions of a document. In future work we would like to deepen the use of structural clues and answer questions over multiple documents, using paragraph structure, titles, sections and more. Incorporating coreference resolution would be another important direction for future work. We argue that this is necessary for developing systems that can efficiently answer the information needs of users over large quantities of text. Acknowledgement We appreciate feedbacks from Google colleagues. We also thank Yejin Choi, Kenton Lee, Mike Lewis, Mark Yatskar and Luke Zettlemoyer for comments on the earlier draft of the paper. 
The last author is partially supported by Israel Science Foundation, grant 942/16. 217 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. http://tensorflow.org/. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies . Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2014. Multiple object recognition with visual attention. The International Conference on Learning Representations . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations . Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. Proceedings of the Annual Meeting of the Association for Computational Linguistics . Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the Conference of the Empirical Methods in Natural Language Processing. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research (JMLR) 12:2493–2537. C´ıcero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR abs/1602.03609. Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with an unbounded action space. Proceedings of the Conference of the Association for Computational Linguistics . Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. http://arxiv.org/abs/1506.03340. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. In Proceedings of the Conference of the Association for Computational Linguistics. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. 
The goldilocks principle: Reading children’s books with explicit memory representations. The International Conference on Learning Representations . Tomasz Jurczyk, Michael Zhai, and Jinho D. Choi. 2016. SelQA: A New Benchmark for Selectionbased Question Answering. In Proceedings of the 28th International Conference on Tools with Artificial Intelligence. San Jose, CA, ICTAI’16. https://arxiv.org/abs/1606.08513. Rudolf Kadlec, Martin Schmid, Ondˇrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 908–918. http://www.aclweb.org/anthology/P16-1086. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. Proceedings of the Annual Meeting of the Association for Computational Linguistics . Yoon Kim. 2014. Convolutional neural networks for sentence classification. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. The International Conference on Learning Representations . Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning. 218 Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Michael EJ Masson. 1983. Conceptual processing of text during skimming and rapid sequential reading. Memory & Cognition 11(3):262–274. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for textbased games using deep reinforcement learning. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop in Advances in Neural Information Processing Systems. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. Proceedings of Empirical Methods in Natural Language Processing . P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference of the Empirical Methods in Natural Language Processing. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference of the Empirical Methods in Natural Language Processing. St´ephane Ross, Geoffrey J Gordon, and Drew Bagnell. 2011. 
A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics. Mrinmaya Sachan and Eric P Xing. 2016. Easy questions first? a case study on curriculum learning for question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, pages 373–382. Md. Arafat Sultan, Vittorio Castelli, and Radu Florian. 2016. A joint model for answer sentence ranking and answer extraction. Transactions of the Association for Computational Linguistics 4:113–125. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016a. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 . Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. 2016b. A parallel-hierarchical model for machine comprehension on sparse data. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics . Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 200–207. Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 . Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. Proceedings of the International Conference on Machine Learning . Yi Yang, Wen-tau Yih, and Christopher Meek. 2016a. Wikiqa: A challenge dataset for open-domain question answering. Proceedings of the Conference of the Empirical Methods in Natural Language Processing . 219 Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016b. Hierarchical attention networks for document classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010. Multi-level structured models for documentlevel sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1046–1056. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep Learning for Answer Sentence Selection. 
In NIPS Deep Learning Workshop. http://arxiv.org/abs/1412.1632.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 221–231 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1021

An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge

Yanchao Hao1,2, Yuanzhe Zhang1,3, Kang Liu1, Shizhu He1, Zhanyi Liu3, Hua Wu3 and Jun Zhao1,2
1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
2 University of Chinese Academy of Sciences, Beijing, 100049, China
3 Baidu Inc., Beijing, 100085, China
{yanchao.hao, yzzhang, kliu, shizhu.he, jzhao}@nlpr.ia.ac.cn {liuzhanyi, wu hua}@baidu.com

Abstract

With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the most promising approaches to access this substantial knowledge. Meanwhile, as neural network-based (NN-based) methods develop, NN-based KB-QA has already achieved impressive results. However, previous work did not put much emphasis on question representation: the question is converted into a fixed vector regardless of its candidate answers, and this simple representation strategy struggles to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via a cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it can alleviate the out-of-vocabulary (OOV) problem, which helps the cross-attention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.

1 Introduction

As the number of knowledge bases (KBs) grows, people are paying more attention to seeking effective methods for accessing these precious intellectual resources. There are several tailor-made languages designed for querying KBs, such as SPARQL (Prudhommeaux and Seaborne, 2008). However, to handle such query languages, users are required not only to be familiar with the particular language grammars, but also to be aware of the architectures of the KBs. By contrast, knowledge base-based question answering (KB-QA) (Unger et al., 2014), which takes natural language as the query language, is a more user-friendly solution, and has become a research focus in recent years. Given natural language questions, the goal of KB-QA is to automatically return answers from the KB. There are two mainstream research directions for this task: semantic parsing-based (SP-based) (Zettlemoyer and Collins, 2009, 2012; Kwiatkowski et al., 2013; Cai and Yates, 2013; Berant et al., 2013; Yih et al., 2015, 2016; Reddy et al., 2016) and information retrieval-based (IR-based) (Yao and Van Durme, 2014; Bordes et al., 2014a,b, 2015; Dong et al., 2015; Xu et al., 2016a,b) methods.
SP-based methods usually focus on constructing a semantic parser that could convert natural language questions into structured expressions like logical forms. IR-based methods usually search answers from the KB based on the information conveyed in questions, where ranking techniques are often adopted to make correct selections from candidate answers. Recently, with the progress of deep learning, neural network-based (NN-based) methods have been introduced to the KB-QA task (Bordes et al., 2014b). Different from previous methods, NNbased methods represent both of the questions and the answers as semantic vectors. Then the complex process of KB-QA could be converted into a similarity matching process between an input question and its candidate answers in a semantic space. The candidates with the highest similarity score will be selected as the final answers. Because they are more adaptive, NN-based methods have attracted more and more attention, and this 221 paper also focuses on using end-to-end neural networks to answer questions over knowledge base. In NN-based methods, the crucial step is to compute the similarity score between a question and a candidate answer, where the key is to learn their representations. Previous methods put more emphasis on learning representation of the answer end. For example, Bordes et al. (2014a) consider the importance of the subgraph of the candidate answer. Dong et al. (2015) make use of the context and the type of the answer. However, the representation of the question end is oligotrophic. Existing approaches often represent a question into a single vector using simple bag-of-words (BOW) model (Bordes et al., 2014a,b), whereas the relatedness to the answer end is neglected. We argue that a question should be represented differently according to the different focuses of various answer aspects1. Take the question “Who is the president of France?” and one of its candidate answers “Francois Hollande” as an example. When dealing with the answer entity Francois Holland, “president” and “France” in the question is more focused, and the question representation should bias towards the two words. While facing the answer type /business/board member, “Who” should be the most prominent word. Meanwhile, some questions may value answer type more than other answer aspects. While in some other questions, answer relation may be the most important information we should consider, which is dynamic and flexible corresponding to different questions and answers. Obviously, this is an attention mechanism, which reveals the mutual influences between the representation of questions and the corresponding answer aspects. We believe that such kind of representation is more expressive. Dong et al. (2015) represents questions using three CNNs with different parameters when dealing with different answer aspects including answer path, answer context and answer type. The method is very enlightening and achieves the best performance on WebQeustions at that time among the end-to-end approaches. However, we argue that simply selecting three independent CNNs is mechanical and inflexible. Thus, we go one step further, and propose a crossattention based neural network to perform KB1An answer aspect could be the answer entity itself, the answer type, the answer context, etc. QA. The cross-attention model, which stands for the mutual attention between the question and the answer aspects, contains two parts: the answertowards-question attention part and the questiontowards-answer attention part. 
The former helps learn flexible and adequate question representations, and the latter helps adjust the question-answer weights that produce the final score; Section 3.2 gives the details. In this way, we formulate the cross-attention mechanism to model the question answering procedure. Note that our proposed model is an entirely end-to-end approach that depends only on training data; some integrated systems that use extra patterns and resources are not directly comparable to ours. Our target is to explore a better solution following the end-to-end KB-QA technical path.

Moreover, we notice that the representations of the KB resources (entities and relations) are also limited in previous work. Specifically, they are often learned solely from the QA training data, which results in two limitations. 1) The global information of the KB is deficient. For example, if the question-answer pair (q, a) appears in the training data, and the global KB information tells us that a′ is similar to a (the complete KB is able to offer this kind of information, e.g., a and a′ share massive context), denoted by (a ∼ a′), then (q, a′) is more likely to be correct. However, the current QA training mechanism cannot guarantee that (a ∼ a′) is learned. 2) The out-of-vocabulary (OOV) problem stands out. Due to the limited coverage of the training data, the OOV problem is common at test time, and many answer entities in the test candidate sets have never been seen before. These resources receive identical attention because they share the same OOV embedding, which harms the proposed attention model. To tackle these two problems, we additionally incorporate the KB itself as training data for the embeddings, besides the original question-answer pairs. In this way, the global structure of the whole KB can be captured, and the OOV problem is alleviated naturally.

In summary, the contributions are as follows. 1) We present a novel cross-attention based NN model tailored to the KB-QA task, which considers the mutual influence between the representation of questions and the corresponding answer aspects. 2) We leverage the global KB information, aiming to represent the answers more precisely. It also alleviates the OOV problem, which is very helpful to the cross-attention model. 3) The experimental results on the open dataset WebQuestions demonstrate the effectiveness of the proposed approach.

2 Overview

The goal of the KB-QA task can be formulated as follows: given a natural language question q, the system returns an entity set A as answers. The architecture of our proposed KB-QA system is shown in Figure 1, which illustrates the basic flow of our approach. First, we identify the topic entity of the question and generate candidate answers from Freebase. Then, a cross-attention based neural network is employed to represent the question under the influence of the candidate answer aspects. Finally, the similarity score between the question and each corresponding candidate answer is calculated, and the candidates with the highest scores are selected as the final answers (we also adopt a margin strategy to obtain multiple answers for a question, as explained in the next section).

Figure 1: The overview of the proposed KB-QA system.
We utilize Freebase (Bollacker et al., 2008) as our knowledge base. It has more than 3 billion facts, and is used as the supporting KB for many QA tasks. In Freebase, the facts are represented by subject-predicate-object triples (s, p, o). For clarity, we call each basic element a resource, which can be either an entity or a relation. For example, (/m/0f8l9c, location.country.capital, /m/05qtj) describes the fact that the capital of France is Paris (Freebase prefixes are omitted for neatness), where /m/0f8l9c and /m/05qtj are entities denoting France and Paris respectively, and location.country.capital is a relation.

3 Our Approach

3.1 Candidate Generation

Ideally, all the entities in Freebase would be candidate answers, but in practice this is time consuming and not really necessary. For each question q, we use the Freebase API (Bollacker et al., 2008) to identify a topic entity, which can be simply understood as the main entity of the question. For example, France is the topic entity of the question "Who is the president of France?". The Freebase API is able to resolve as many as 86% of questions if we use the top-1 result (Yao and Van Durme, 2014). After getting the topic entity, we collect all the entities directly connected to it and the ones connected within 2 hops (for example, (/m/0f8l9c, governing officials, government.position held.office holder, /m/02qg4z) is a 2-hop connection). These entities constitute the candidate set Cq.

3.2 The Neural Cross-Attention Model

We present a cross-attention based neural network, which represents the question dynamically according to different answer aspects while also considering their connections. Concretely, each aspect of the answer focuses on different words of the question and thus decides how the question is represented; the question in turn pays different attention to each answer aspect to decide their weights. Figure 2 shows the architecture of our model; we describe how the system works below.

3.2.1 Question Representation

First of all, we have to obtain the representation of each word in the question. These representations retain all the information of the question and serve the following steps. Suppose question q is expressed as q = (x1, x2, ..., xn), where xi denotes the ith word. As shown in Figure 2, we first look up a word embedding matrix $E_w \in \mathbb{R}^{d \times v_w}$ to get the word embeddings; this matrix is randomly initialized and updated during the training process. Here, $d$ is the dimension of the embeddings and $v_w$ denotes the vocabulary size of natural language words. Then, the embeddings are fed into a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) network. LSTMs have been proven effective in many natural language processing (NLP) tasks such as machine translation (Sutskever et al., 2014) and dependency parsing (Dyer et al., 2015), and they are adept at handling long sentences.

Figure 2: The architecture of the proposed cross-attention based neural network. Note that only one aspect (in orange) is depicted for clarity; the other three aspects follow the same way.
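The candidate-generation step just described amounts to a bounded breadth-first expansion around the topic entity. The sketch below illustrates it in Python under simplifying assumptions: the KB is a small in-memory list of (s, p, o) triples, topic-entity linking (the Freebase API call in the paper) is taken as given, and the triples and identifiers in the toy example are only illustrative.

```python
from collections import defaultdict

def build_index(triples):
    """Index (s, p, o) triples by subject for fast neighbour lookup."""
    index = defaultdict(list)
    for s, p, o in triples:
        index[s].append((p, o))
    return index

def candidate_set(topic_entity, index, hops=2):
    """Collect every entity reachable from the topic entity within `hops` hops."""
    frontier, candidates = {topic_entity}, set()
    for _ in range(hops):
        next_frontier = set()
        for entity in frontier:
            for _, obj in index.get(entity, []):
                if obj not in candidates and obj != topic_entity:
                    candidates.add(obj)
                    next_frontier.add(obj)
        frontier = next_frontier
    return candidates

# Toy usage with two made-up triples; real code would read a Freebase dump and
# obtain the topic entity (e.g. /m/0f8l9c for "France") from the Freebase API.
triples = [("/m/0f8l9c", "location.country.capital", "/m/05qtj"),
           ("/m/05qtj", "location.location.containedby", "/m/0f8l9c")]
print(candidate_set("/m/0f8l9c", build_index(triples)))
```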
Note that if we use a unidirectional LSTM, the output for a specific word contains only the information of the words before it, whereas the words after it are not taken into account. To avoid this, we employ a bidirectional LSTM as Bahdanau et al. (2015) do, which consists of both forward and backward networks. The forward LSTM processes the question from left to right, and the backward LSTM processes it in the reverse order. Thus, we acquire two hidden state sequences, one from the forward pass $(\overrightarrow{h}_1, \overrightarrow{h}_2, \ldots, \overrightarrow{h}_n)$ and the other from the backward pass $(\overleftarrow{h}_1, \overleftarrow{h}_2, \ldots, \overleftarrow{h}_n)$. We concatenate the forward hidden state and the backward hidden state of each word, resulting in $h_j = [\overrightarrow{h}_j; \overleftarrow{h}_j]$. The hidden size of the forward and backward LSTMs is $d/2$, so the concatenated vector has dimension $d$. In this way, we obtain the representation of each word in the question.

3.2.2 Answer Aspect Representation

We directly use the embedding of each answer aspect, looked up in the KB embedding matrix $E_k \in \mathbb{R}^{d \times v_k}$. Here, $v_k$ is the vocabulary size of the KB resources. The embedding matrix is randomly initialized and learned during training, and can be further enhanced with the help of global information, as described in Section 3.3. Concretely, we employ four kinds of answer aspects: answer entity $a_e$, answer relation $a_r$, answer type $a_t$ and answer context $a_c$ (the answer context is the 1-hop entities and predicates that connect to the answer entity along the answer path). Their embeddings are denoted as $e_e$, $e_r$, $e_t$ and $e_c$, respectively. It is worth noting that the answer context consists of multiple KB resources, which we denote as $(c_1, c_2, \ldots, c_m)$. We first acquire their KB embeddings $(e_{c_1}, e_{c_2}, \ldots, e_{c_m})$ through $E_k$, then calculate an average embedding $e_c = \frac{1}{m}\sum_{i=1}^{m} e_{c_i}$.

3.2.3 Cross-Attention Model

The most crucial part of the proposed approach is the cross-attention mechanism, which is composed of two parts: the answer-towards-question attention part and the question-towards-answer attention part. The proposed cross-attention model can also be intuitively interpreted as a re-reading mechanism (Hermann et al., 2015). Our aim is to select correct answers from a candidate set. When we judge a candidate answer, suppose we first look at its type: we reread the question to find out which part of the question should receive more focus (handling attention). Then we go to the next aspect and reread the question again, until all the aspects are utilized. After we have read all the answer aspects and obtained all the scores, the final similarity score between question and answer is a weighted sum of these scores. We believe that this mechanism helps the system better understand the question with the help of the answer aspects, and it may lead to a performance improvement.

• Answer-towards-question (A-Q) attention

Based on our assumption, each answer aspect should focus on different words of the same question. The extent of attention can be measured by the relatedness between each word representation $h_j$ and an answer aspect embedding $e_i$. We propose the following formulas to calculate the weights:

$\alpha_{ij} = \frac{\exp(\omega_{ij})}{\sum_{k=1}^{n}\exp(\omega_{ik})}$   (1)

$\omega_{ij} = f(W^T[h_j; e_i] + b)$   (2)

Here, $\alpha_{ij}$ denotes the weight of attention from answer aspect $e_i$ to the $j$th word in the question, where $e_i \in \{e_e, e_r, e_t, e_c\}$, and $f(\cdot)$ is a non-linear activation function, the hyperbolic tangent here. Let $n$ be the length of the question. $W \in \mathbb{R}^{2d \times d}$ is an intermediate matrix and $b$ is the offset; both are randomly initialized and updated during training.
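The following numpy sketch illustrates the A-Q attention of Equations (1)-(2) together with the weighted sum that follows (Equation (3) below). To keep the score a scalar per word, the sketch uses a single 2d-dimensional weight vector w and a scalar bias b rather than the intermediate matrix W of the text; treat this parameterization and the random toy inputs as assumptions of the sketch, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def aq_attention(H, e_aspect, w, b):
    """Return attention weights alpha (Eq. 1) and the attended vector q_i (Eq. 3)."""
    n, _ = H.shape
    # omega_ij = tanh(w . [h_j; e_i] + b), cf. Eq. (2), reduced to a scalar per word
    scores = np.array([np.tanh(w @ np.concatenate([H[j], e_aspect]) + b)
                       for j in range(n)])
    alpha = softmax(scores)          # Eq. (1)
    q_i = alpha @ H                  # Eq. (3): weighted sum of word representations
    return alpha, q_i

# Toy usage with random values: a 6-word question, d = 8.
rng = np.random.default_rng(0)
n, d = 6, 8
H = rng.normal(size=(n, d))          # concatenated BiLSTM states h_1..h_n
e_rel = rng.normal(size=d)           # one answer aspect embedding, e.g. e_r
w, b = rng.normal(size=2 * d), 0.0
alpha, q_rel = aq_attention(H, e_rel, w, b)
print(alpha.round(3), float(alpha.sum()))   # the weights sum to 1
```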
Subsequently, according to the specific answer aspect $e_i$, the attention weights are used to calculate a weighted sum of the hidden representations, resulting in a semantic vector that represents the question:

$q_i = \sum_{j=1}^{n} \alpha_{ij} h_j$   (3)

The similarity score of the question $q$ and this particular candidate answer aspect $e_i$ ($e_i \in \{e_e, e_r, e_t, e_c\}$) can then be defined as follows:

$S(q, e_i) = h(q_i, e_i)$   (4)

The scoring function $h(\cdot)$ is the inner product between the sentence representation $q_i$, which already carries the attention from the answer aspect, and the corresponding answer aspect embedding $e_i$; it is computed inside the network and updated during the training process.

• Question-towards-answer (Q-A) attention

Intuitively, different questions should value the four answer aspects differently. Since we have already calculated the scores $S(q, e_i)$, we define the final similarity score of the question $q$ and each candidate answer $a$ as follows:

$S(q, a) = \sum_{e_i \in \{e_e, e_r, e_t, e_c\}} \beta_{e_i} S(q, e_i)$   (5)

$\beta_{e_i} = \frac{\exp(\omega_{e_i})}{\sum_{e_k \in \{e_e, e_r, e_t, e_c\}} \exp(\omega_{e_k})}$   (6)

$\omega_{e_i} = f(W^T[\bar{q}; e_i] + b)$   (7)

$\bar{q} = \frac{1}{n}\sum_{j=1}^{n} h_j$   (8)

Here $\beta_{e_i}$ denotes the attention of the question towards the answer aspects, indicating which answer aspect should be more focused in a given (q, a) pair. $W \in \mathbb{R}^{2d \times d}$ is again an intermediate matrix, as in the answer-towards-question attention part, and $b$ is an offset value (note that the $W$ and $b$ in the two attention parts are different and independent). $\bar{q}$ is calculated by average-pooling the bidirectional LSTM hidden states, resulting in a vector that represents the question and determines which answer aspect should be more focused.

3.2.4 Training

We first construct the training data. Since we have (q, a) pairs as supervision, the candidate set $C_q$ can be divided into two subsets, namely the correct answer set $P_q$ and the wrong answer set $N_q$. For each correct answer $a \in P_q$, we randomly select $k$ wrong answers $a' \in N_q$ as negative examples. For some topic entities, there may not be enough wrong answers to obtain $k$ negative examples; in this case, we extend $N_q$ with answers from other randomly selected candidate sets $C'_q$. With the generated training data, we can make use of pairwise training. The training loss is the following hinge loss:

$L_{q,a,a'} = [\gamma + S(q, a') - S(q, a)]_+$   (9)

where $\gamma$ is a positive real number that enforces a margin between positive and negative examples, and $[z]_+$ means $\max(0, z)$. The intuition of this training strategy is to guarantee that the scores of positive question-answer pairs are higher than those of negative ones by a margin. The objective function is:

$\min \sum_q \frac{1}{|P_q|} \sum_{a \in P_q} \sum_{a' \in N_q} L_{q,a,a'}$   (10)

We adopt stochastic gradient descent (SGD) with shuffled minibatches to minimize this objective.

3.2.5 Inference

In the testing stage, given the candidate answer set $C_q$, we calculate $S(q, a)$ for each $a \in C_q$ and find the maximum value $S_{\max}$:

$S_{\max} = \max_{a \in C_q} S(q, a)$   (11)

It is worth noting that many questions have more than one answer, so it is improper to take only the candidate answer with the maximum score as the final answer. Instead, we take advantage of the margin $\gamma$: if the score of a candidate answer is within the margin of $S_{\max}$, we put it in the final answer set.

$A = \{\hat{a} \mid S_{\max} - S(q, \hat{a}) < \gamma\}$   (12)
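To make the training and inference steps concrete, the sketch below implements the hinge loss of Equations (9)-(10) and the margin-based answer selection of Equations (11)-(12), assuming the scores S(q, a) have already been produced by the network; the candidate identifiers and score values are made up for illustration.

```python
def hinge_loss(score_pos, score_neg, gamma=0.6):
    """L_{q,a,a'} = [gamma + S(q,a') - S(q,a)]_+   (Eq. 9)."""
    return max(0.0, gamma + score_neg - score_pos)

def question_loss(pos_scores, neg_scores, gamma=0.6):
    """Per-question term of Eq. (10): sum over (positive, negative) pairs, averaged over positives."""
    total = sum(hinge_loss(sp, sn, gamma) for sp in pos_scores for sn in neg_scores)
    return total / max(len(pos_scores), 1)

def select_answers(candidate_scores, gamma=0.6):
    """Eqs. (11)-(12): keep every candidate whose score is within gamma of the best."""
    s_max = max(candidate_scores.values())
    return {a for a, s in candidate_scores.items() if s_max - s < gamma}

# Toy usage: three candidates, two of which fall inside the margin of the best.
scores = {"/m/02qg4z": 1.30, "/m/05qtj": 1.05, "/m/01mp": 0.20}
print(select_answers(scores))                   # {'/m/02qg4z', '/m/05qtj'}
print(question_loss([1.30], [0.20, 1.05]))      # one positive vs. two negatives
```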
3.3 Combining Global Knowledge

In this section, we elaborate on how the global information of the KB can be leveraged. As stated before, we try to take into account the complete knowledge information of the KB. To this end, we adopt the TransE model (Bordes et al., 2013) and integrate its outcome into our training process. In TransE, relations are considered as translations in the embedding space. For consistency, we denote each fact as (s, p, o). TransE also uses a pairwise training strategy, with randomly sampled corrupted facts (s′, p, o′) as negative examples. The distance measure $d(s + p, o)$ is defined as $\|s + p - o\|_2^2$, and the training loss is given as follows:

$L_k = \sum_{(s,p,o) \in S} \sum_{(s',p,o') \in S'} [\gamma_k + d(s + p, o) - d(s' + p, o')]_+$   (13)

where $S$ is the set of KB facts and $S'$ is the set of corrupted facts. In our QA task, we filter out the completely unrelated facts to save time. Specifically, we first collect all the topic entities of all the questions as an initial set, then expand the set by adding directly connected and 2-hop entities. Finally, all the facts containing these entities form the positive set, and the negative facts are randomly corrupted. This is a compromise necessitated by the large scale of Freebase.

To employ the global information in our training process, we adopt a multi-task training strategy: we perform KB-QA training and TransE training in turn. The proposed training process ensures that the global KB information acts as additional supervision, and the interconnections among the resources are fully considered. In addition, as more KB resources are involved, the OOV problem is relieved. Since all OOV resources would otherwise receive exactly the same attention towards a question, which weakens the effectiveness of the attention model, the alleviation of OOV brings additional benefits to the attention model.
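For reference, a minimal sketch of the TransE objective in Equation (13) is given below, with embeddings held in a plain Python dictionary standing in for the KB embedding matrix $E_k$; in the multi-task strategy described above, updates from this loss would simply be alternated with updates from the KB-QA loss. The facts and the corruption shown are toy examples.

```python
import numpy as np

def transe_distance(s, p, o):
    """d(s + p, o) = ||s + p - o||_2^2."""
    diff = s + p - o
    return float(diff @ diff)

def transe_loss(fact, corrupted, emb, gamma_k=1.0):
    """One term of Eq. (13): [gamma_k + d(s+p, o) - d(s'+p, o')]_+."""
    s, p, o = (emb[x] for x in fact)
    s_c, _, o_c = (emb[x] for x in corrupted)
    return max(0.0, gamma_k + transe_distance(s, p, o) - transe_distance(s_c, p, o_c))

# Toy usage: one true fact and one fact with a corrupted subject.
rng = np.random.default_rng(0)
emb = {r: rng.normal(size=4) for r in
       ["/m/0f8l9c", "location.country.capital", "/m/05qtj", "/m/02qg4z"]}
fact = ("/m/0f8l9c", "location.country.capital", "/m/05qtj")
corrupted = ("/m/02qg4z", "location.country.capital", "/m/05qtj")
print(transe_loss(fact, corrupted, emb))
```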
4 Experiments

To evaluate the proposed method, we conduct experiments on the WebQuestions (Berant et al., 2013) dataset, which includes 3,778 question-answer pairs for training and 2,032 for testing. The questions are collected from the Google Suggest API, and the answers are labeled manually via Amazon MTurk; all answers are from Freebase. We use three-quarters of the training data as the training set, and the rest as the validation set. We use the F1 score as the evaluation metric, and the average result is computed by the script provided by Berant et al. (2013). Note that our proposed approach is an entirely end-to-end method, which depends only on training data. It is worth noting that Yih et al. (2015, 2016) achieve much higher F1 scores than other methods: their staged system is able to address more questions with constraints and aggregations. However, their approach applies a number of manually designed rules and features derived from observations of the training-set questions, and these manual efforts reduce its adaptability. Moreover, some integrated systems, such as Xu et al. (2016a,b), achieve higher F1 scores by leveraging Wikipedia free text as external knowledge, so their systems are not directly comparable to ours.

4.1 Settings

For KB-QA training, we use mini-batch stochastic gradient descent to minimize the pairwise training loss. The minibatch size is set to 100 and the learning rate to 0.01. Both the word embedding matrix $E_w$ and the KB embedding matrix $E_k$ are normalized after each epoch. The embedding size is d = 512, so the hidden unit size is 256. The margin $\gamma$ is set to 0.6, and the number of negative examples is k = 2000. In the TransE training process we also set the embedding dimension to 512 and the minibatch size to 100, and $\gamma_k$ is set to 1. All these hyperparameters are determined according to the performance on the validation set.

4.2 Results

The effectiveness of the proposed approach. To demonstrate the effectiveness of the proposed approach, we compare our method with state-of-the-art end-to-end NN-based methods.

Methods                 Avg F1
Bordes et al., 2014b     29.7
Bordes et al., 2014a     39.2
Yang et al., 2014        41.3
Dong et al., 2015        40.8
Bordes et al., 2015      42.2
Our approach             42.9

Table 1: The evaluation results on WebQuestions.

Table 1 shows the results on the WebQuestions dataset. Bordes et al. (2014b) apply a BOW method to obtain a single vector for both questions and answers. Bordes et al. (2014a) further improve their work by proposing the concept of subgraph embeddings: besides the answer path, the subgraph contains all the entities and relations connected to the answer entity, and the final vector is also obtained by a bag-of-words strategy. Yang et al. (2014) follow the SP-based manner, but use embeddings to map entities and relations into KB resources, so that the question can be converted into logical forms; they jointly consider the two mapping processes. Dong et al. (2015) use three columns of convolutional neural networks (CNNs) to represent questions corresponding to three aspects of the answers, namely the answer context, the answer path and the answer type. Bordes et al. (2015) put KB-QA into the memory networks framework (Sukhbaatar et al., 2015), and achieve the previous state-of-the-art performance among end-to-end methods. Our approach employs a bidirectional LSTM, the cross-attention model and global KB information. From the results, we observe that our approach achieves the best performance of all the end-to-end methods on WebQuestions. Bordes et al. (2014b; 2014a; 2015) all utilize the BOW model to represent the questions, while ours takes advantage of the attention of answer aspects to represent the questions dynamically. Also note that Bordes et al. (2015) use additional training data such as Reverb (Fader et al., 2011) and their original dataset SimpleQuestions. Dong et al. (2015) employ three fixed CNNs to represent questions, while ours is able to express the focus of each unique answer aspect on the words in the question. Besides, our approach employs the global KB information. We therefore believe that the results faithfully show that the proposed approach is more effective than the other competitive methods.

Model Analysis. In this part, we further discuss the impacts of the components of our model. Table 2 indicates the effectiveness of the different parts of the model.

Methods                    Avg F1
LSTM                        38.2
Bi LSTM                     39.1
Bi LSTM+A-Q-ATT             41.6
Bi LSTM+C-ATT               41.8
Bi LSTM+GKI                 40.4
Bi LSTM+A-Q-ATT+GKI         42.6
Bi LSTM+C-ATT+GKI           42.9

Table 2: The ablation results of our models.

LSTM employs a unidirectional LSTM and uses the last hidden state as the question representation. Bi LSTM adopts a bidirectional LSTM. A-Q-ATT denotes the answer-towards-question attention part, and C-ATT stands for our cross-attention. GKI means global knowledge information. Bi LSTM+C-ATT+GKI is our full proposed approach. From the results, we can observe the following. 1) Bi LSTM+C-ATT dramatically improves the F1 score by 2.7 points compared with Bi LSTM, and is 0.2 points higher than Bi LSTM+A-Q-ATT. Similarly, Bi LSTM+C-ATT+GKI significantly outperforms Bi LSTM+GKI by 2.5 points and improves on Bi LSTM+A-Q-ATT+GKI by 0.3 points. These results show that the proposed cross-attention model is effective.
2) Bi LSTM+GKI performs better than Bi LSTM, achieving an improvement of 1.3 points. Similarly, Bi LSTM+C-ATT+GKI improves over Bi LSTM+C-ATT by 1.1 points, which indicates that the proposed training strategy successfully leverages the global information of the underlying KB. 3) Bi LSTM+C-ATT+GKI achieves the best performance, as expected, and improves over the original Bi LSTM dramatically, by 3.8 points. This directly shows the power of the attention model and the global KB information.

To illustrate the effectiveness of the attention mechanism clearly, we present the attention weights of a question in the form of a heat map, as shown in Figure 3.

Figure 3: The visualized attention heat map for the question "where is the carpathian mountain range located". Answer entity: /m/06npd (Slovakia), answer relation: partially containedby, answer type: /location/country, answer context: (/m/04dq9kf, /m/01mp, ...).

From this example we observe that our method is able to capture the attention properly. It is instructive to figure out which part of the question is attended to when dealing with different answer aspects, and the heat map helps us understand which parts are most useful for selecting correct answers. For instance, from Figure 3, we can see that location.country pays great attention to "Where", indicating that "Where" is much more important than the other parts of the question when dealing with this type. In other words, the other parts are not that crucial, since "Where" strongly implies that the question is asking about a location. As for the Q-A attention part, we see that answer type and answer relation are more important than the other answer aspects in this example.

4.3 Error Analysis

We randomly sample 100 imperfectly answered questions and categorize the errors into two main classes, as follows.

Wrong attention. In some cases (18 of the 100 questions, 18%), we find the generated attention weights unreasonable. For instance, for the question "What are the songs that Justin Bieber wrote?", the answer type /music/composition pays the most attention to "What" rather than "songs". We think this is due to bias in the training data, and we believe these errors could be resolved by introducing more instructive training data.

Complex questions and label errors. Another challenging problem is complex questions (35%). For example, "When was the last time Knicks won the championship?" actually asks about the most recent championship, but the predicted answers give all the championships. This is because the model cannot learn what "last" means from the training process. In addition, label mistakes also influence the evaluation (3%); for example, for "What college did John Nash teach at?", the labeled answer is Princeton University, but Massachusetts Institute of Technology should also be an answer, and the proposed method answers it correctly. Other errors include topic entity generation errors and multiple-answer errors (giving more answers than expected). We suspect these errors are caused by the simple implementations of the related steps in our method, and we do not discuss them in detail.

5 Related Work

The past years have seen a growing amount of research on KB-QA, shaping an interaction paradigm that allows end users to profit from the expressive power of Semantic Web data while at the same time hiding its complexity behind an intuitive and easy-to-use interface.
At the same time the growing amount of data has led to a heterogeneous data landscape where QA systems struggle to keep up with the volume, variety and veracity of the underlying knowledge. 5.1 Neural Network-based KB-QA In recent years, deep neural networks have been applied to many NLP tasks, showing promising results. Bordes et al. (2014b) was the first to introduce NN-based method to solve KB-QA problem. The questions and KB triples were represented by vectors in a low dimensional space. Thus the cosine similarity could be used to find the most possible answer. BOW method was employed to obtain a single vector for both the questions and the answers. Pairwise training was utilized, and the negative examples were randomly selected from the KB facts. Bordes et al. (2014a) further improved their work by proposing the concept of subgraph embeddings. The key idea was to involve as much information as possible in the answer end. Besides the answer triple, the subgraph contained all the entities and relations connected to the answer entity. The final vector was also obtained by bag-of-words strategy. Yih et al. (2014) focused on single-relation questions. The KB-QA task was divided into two steps. Firstly, they found the topic entity of the question. Then, the rest of the question was represented by CNNs and used to match relations. Yang et al. (2014) tackled entity and relation mapping as joint procedures. Actually, these two methods followed the SP-based manner, but they took advantage of neural networks to obtain intermediate mapping results. The most similar work to ours is Dong et al. (2015). They considered the different aspects of answers, using three columns of CNNs to represent questions respectively. The difference is that our approach uses cross-attention mechanism for each unique answer aspect, so the question representation is not fixed to only three types. Moreover, we utilize the global KB information. Xu et al. (2016a; 2016b) proposed integrated systems to address KB-QA problems incorporating Wikipedia free text, in which they used multichannel CNNs to extract relations. 5.2 Attention-based Model The attention mechanism has been widely used in different areas. Bahdanau et al. (2015) first applied attention model in NLP. They improved 228 the encoder-decoder Neural Machine Translation (NMT) framework by jointly learning align and translation. They argued that representing source sentence by a fixed vector is unreasonable, and proposed a soft-align method, which could be understood as attention mechanism. Rush et al. (2015) implemented sentence-level summarization task. They utilized local attention-based model that generated each word of the summary conditioned on the input sentence. Wang et al. (2016) proposed an inner attention mechanism that the attention was imposed directly to the input. And their experiment on answer selection showed the advantage of inner attention compared with traditional attention methods. Yin et al. (2016) tackled simple question answering by an attentive convolutional neural network. They stacked an attentive max-pooling above convolution layer to model the relationship between predicates and question patterns. Our approach differs from previous work in that we use attentions to help represent questions dynamically, not generating current word from vocabulary as before. 6 Conclusion In this paper, we focus on KB-QA task. 
Firstly, we consider the impacts of the different answer aspects when representing the question, and propose a novel cross-attention model for KB-QA. Specifically, we employ the focus of the answer aspects to each question word and the attention weights of the question towards the answer aspects. This kind of dynamic representation is more precise and flexible. Secondly, we leverage the global KB information, which could take full advantage of the complete KB, and also alleviate the OOV problem for the attention model. The extensive experiments demonstrate that the proposed approach could achieve better performance compared with state-of-the-art end-to-end methods. Acknowledgments This work was supported by the Natural Science Foundation of China (No.61533018) and the National Program of China (973 program No. 2014CB340505). And this research work was also supported by Google through focused research awards program. We would like to thank the anonymous reviewers for their useful comments and suggestions. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proceedings of ICLR,2015 . Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1533–1544. http://aclweb.org/anthology/D13-1160. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, pages 1247–1250. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 615–620. https://doi.org/10.3115/v1/D14-1067. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems. pages 2787–2795. Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, pages 165–180. Qingqing Cai and Alexander Yates. 2013. Largescale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 423–433. http://aclweb.org/anthology/P13-1042. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over freebase with multicolumn convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 260–269. https://doi.org/10.3115/v1/P15-1026. 229 Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and A. Noah Smith. 2015. 
Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 334–343. https://doi.org/10.3115/v1/P151033. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1535–1545. http://aclweb.org/anthology/D11-1142. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1545–1556. http://aclweb.org/anthology/D13-1161. Eric Prudhommeaux and Andy Seaborne. 2008. Sparql query language for rdf. w3c recommendation, january 2008. Siva Reddy, Oscar T¨ackstr¨om, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming dependency structures to logical forms for semantic parsing. Transactions of the Association of Computational Linguistics 4:127–141. http://aclweb.org/anthology/Q16-1010. M. Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379–389. https://doi.org/10.18653/v1/D15-1044. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Christina Unger, Andr´e Freitas, and Philipp Cimiano. 2014. An introduction to question answering over linked data. In Reasoning Web International Summer School. Springer, pages 100–140. Bingning Wang, Kang Liu, and Jun Zhao. 2016. Inner attention based recurrent neural networks for answer selection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1288–1297. https://doi.org/10.18653/v1/P16-1122. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016b. Hybrid question answering over knowledge base and free text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 2397–2407. http://aclweb.org/anthology/C16-1226. Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016a. Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 2326– 2336. 
https://doi.org/10.18653/v1/P16-1220. Min-Chul Yang, Nan Duan, Ming Zhou, and HaeChang Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 645–650. https://doi.org/10.3115/v1/D14-1071. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 956–966. https://doi.org/10.3115/v1/P14-1090. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1321–1331. https://doi.org/10.3115/v1/P151128. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 643–648. https://doi.org/10.3115/v1/P14-2105. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of 230 semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 201–206. https://doi.org/10.18653/v1/P16-2033. Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch¨utze. 2016. Simple question answering by attentive convolutional neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 1746–1756. http://aclweb.org/anthology/C16-1164. Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, pages 976–984. http://aclweb.org/anthology/P09-1110. Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420 . 231
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 232–242, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1022

Translating Neuralese

Jacob Andreas   Anca Dragan   Dan Klein
Computer Science Division
University of California, Berkeley
{jda,anca,klein}@cs.berkeley.edu

Abstract

Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language. (We have released code and data at http://github.com/jacobandreas/neuralese.)

1 Introduction

Several recent papers have described approaches for learning deep communicating policies (DCPs): decentralized representations of behavior that enable multiple agents to communicate via a differentiable channel that can be formulated as a recurrent neural network. DCPs have been shown to solve a variety of coordination problems, including reference games (Lazaridou et al., 2016b), logic puzzles (Foerster et al., 2016), and simple control (Sukhbaatar et al., 2016). Appealingly, the agents' communication protocol can be learned via direct backpropagation through the communication channel, avoiding many of the challenging inference problems associated with learning in classical decentralized decision processes (Roth et al., 2005).

Figure 1: Example interaction between a pair of agents in a deep communicating policy. Both cars are attempting to cross the intersection, but cannot see each other. By exchanging message vectors z(t), the agents are able to coordinate and avoid a collision. This paper presents an approach for understanding the contents of these message vectors by translating them into natural language.

But analysis of the strategies induced by DCPs has remained a challenge. As an example, Figure 1 depicts a driving game in which two cars, which are unable to see each other, must both cross an intersection without colliding. In order to ensure success, it is clear that the cars must communicate with each other. But a number of successful communication strategies are possible—for example, they might report their exact (x, y) coordinates at every timestep, or they might simply announce whenever they are entering and leaving the intersection. If these messages were communicated in natural language, it would be straightforward to determine which strategy was being employed.
However, DCP agents instead communicate with an automatically induced protocol of unstructured, real-valued recurrent state vectors—an artificial language we might call "neuralese," which superficially bears little resemblance to natural language, and thus frustrates attempts at direct interpretation.

We propose to understand neuralese messages by translating them. In this work, we present a simple technique for inducing a dictionary that maps between neuralese message vectors and short natural language strings, given only examples of DCP agents interacting with other agents, and humans interacting with other humans. Natural language already provides a rich set of tools for describing beliefs, observations, and plans—our thesis is that these tools provide a useful complement to the visualization and ablation techniques used in previous work on understanding complex models (Strobelt et al., 2016; Ribeiro et al., 2016).

While structurally quite similar to the task of machine translation between pairs of human languages, interpretation of neuralese poses a number of novel challenges. First, there is no natural source of parallel data: there are no bilingual "speakers" of both neuralese and natural language. Second, there may not be a direct correspondence between the strategy employed by humans and DCP agents: even if it were constrained to communicate using natural language, an automated agent might choose to produce a different message from humans in a given state. We tackle both of these challenges by appealing to the grounding of messages in gameplay. Our approach is based on one of the core insights in natural language semantics: messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world. Based on this intuition, we introduce a translation criterion that matches neuralese messages with natural language strings by minimizing statistical distance in a common representation space of distributions over speaker states. We explore several related questions:

• What makes a good translation, and under what conditions is translation possible at all? (Section 4)

• How can we build a model to translate between neuralese and natural language? (Section 5)

• What kinds of theoretical guarantees can we provide about the behavior of agents communicating via this translation model? (Section 6)

Our translation model and analysis are general, and in fact apply equally to human–computer and human–human translation problems grounded in gameplay. In this paper, we focus our experiments specifically on the problem of interpreting communication in deep policies, and apply our approach to the driving game in Figure 1 and two reference games of the kind shown in Figure 2. We find that this approach outperforms a more conventional machine translation criterion both when attempting to interoperate with neuralese speakers and when predicting their state.

Figure 2: Overview of our approach—best-scoring translations generated for a reference game involving images of birds. The speaking agent's goal is to send a message that uniquely identifies the bird on the left. From these translations it can be seen that the learned model appears to discriminate based on coarse attributes like size and color.
2 Related work A variety of approaches for learning deep policies with communication were proposed essentially simultaneously in the past year. We have broadly labeled these as “deep communicating policies”; concrete examples include Lazaridou et al. (2016b), Foerster et al. (2016), and Sukhbaatar et al. (2016). The policy representation we employ in this paper is similar to the latter two of these, although the general framework is agnostic to low-level modeling details and could be straightforwardly applied to other architectures. Analysis of communication strategies in all these papers has been largely adhoc, obtained by clustering states from which similar messages are emitted and attempting to manually assign semantics to these clusters. The present work aims at developing tools for performing this analysis automatically. Most closely related to our approach is that of Lazaridou et al. (2016a), who also develop a model for assigning natural language interpretations to learned messages; however, this approach relies on supervised cluster labels and is targeted specifically towards referring expression games. Here we attempt to develop an approach that can handle general multiagent interactions without assuming a prior discrete structure in space of observations. 233 The literature on learning decentralized multiagent policies in general is considerably larger (Bernstein et al., 2002; Dibangoye et al., 2016). This includes work focused on communication in multiagent settings (Roth et al., 2005) and even communication using natural language messages (Vogel et al., 2013b). All of these approaches employ structured communication schemes with manually engineered messaging protocols; these are, in some sense, automatically interpretable, but at the cost of introducing considerable complexity into both training and inference. Our evaluation in this paper investigates communication strategies that arise in a number of different games, including reference games and an extended-horizon driving game. Communication strategies for reference games were previously explored by Vogel et al. (2013a), Andreas and Klein (2016) and Kazemzadeh et al. (2014), and reference games specifically featuring end-to-end communication protocols by Yu et al. (2016). On the control side, a long line of work considers nonverbal communication strategies in multiagent policies (Dragan and Srinivasa, 2013). Another group of related approaches focuses on the development of more general machinery for interpreting deep models in which messages have no explicit semantics. This includes both visualization techniques (Zeiler and Fergus, 2014; Strobelt et al., 2016), and approaches focused on generating explanations in the form of natural language (Hendricks et al., 2016; Vedantam et al., 2017). 3 Problem formulation Games Consider a cooperative game with two players a and b of the form given in Figure 3. At every step t of this game, player a makes an observation x(t) a and receives a message z(t−1) b from b. It then takes an action u(t) a and sends a message z(t) a to b. (The process is symmetric for b.) The distributions p(ua|xa, zb) and p(za|xa) together define a policy π which we assume is shared by both players, i.e. p(ua|xa, zb) = p(ub|xb, za) and p(za|xa) = p(zb|xb). As in a standard Markov decision process, the actions (u(t) a , u(t) b ) alter the world state, generating new observations for both players and a reward shared by both. 
The distributions p(z|x) and p(u|x, z) may also be viewed as defining a language: they specify how a speaker will generate messages based on world states, and how a listener will respond to these mesa b x(1) a x(1) b x(2) b u(1) a u(2) a u(2) b u(1) b z(1) a z(2) a z(1) b z(2) b a b x(2) a 0.3: stop 0.5: forward 0.1: left 0.1: right observations actions messages Figure 3: Schematic representation of communication games. At every timestep t, players a and b make an observation x(t) and receive a message z(t−1), then produce an action u(t) and a new message z(t). sages. Our goal in this work is to learn to translate between pairs of languages generated by different policies. Specifically, we assume that we have access to two policies for the same game: a “robot policy” πr and a “human policy” πh. We would like to use the representation of πh, the behavior of which is transparent to human users, in order to understand the behavior of πr (which is in general an uninterpretable learned model); we will do this by inducing bilingual dictionaries that map message vectors zr of πr to natural language strings zh of πh and vice-versa. Learned agents πr Our goal is to present tools for interpretation of learned messages that are agnostic to the details of the underlying algorithm for acquiring them. We use a generic DCP model as a basis for the techniques developed in this paper. Here each agent policy is represented as a deep recurrent Q network (Hausknecht and Stone, 2015). This network is built from communicating cells of the kind depicted in Figure 4. At every timestep, this agent receives three pieces of information: an x(t) a z(t−1) b h(t−1) a h(t) a u(t) a z(t) a MLP GRU Figure 4: Cell implementing a single step of agent communication (compare with Sukhbaatar et al. (2016) and Foerster et al. (2016)). MLP denotes a multilayer perceptron; GRU denotes a gated recurrent unit (Cho et al., 2014). Dashed lines represent recurrent connections. 234 observation of the current state of the world, the agent’s memory vector from the previous timestep, and a message from the other player. It then produces three outputs: a predicted Q value for every possible action, a new memory vector for the next timestep, and a message to send to the other agent. Sukhbaatar et al. (2016) observe that models of this form may be viewed as specifying a single RNN in which weight matrices have a particular block structure. Such models may thus be trained using the standard recurrent Q-learning objective, with communication protocol learned end-to-end. Human agents πh The translation model we develop requires a representation of the distribution over messages p(za|xa) employed by human speakers (without assuming that humans and agents produce equivalent messages in equivalent contexts). We model the human message generation process as categorical, and fit a simple multilayer perceptron model to map from observations to words and phrases used during human gameplay. 4 What’s in a translation? What does it mean for a message zh to be a “translation” of a message zr? In standard machine translation problems, the answer is that zh is likely to co-occur in parallel data with zr; that is, p(zh|zr) is large. Here we have no parallel data: even if we could observe natural language and neuralese messages produced by agents in the same state, we would have no guarantee that these messages actually served the same function. 
Our answer must instead appeal to the fact that both natural language and neuralese messages are grounded in a common environment. For a given neuralese message zr, we will first compute a grounded representation of that message’s meaning; to translate, we find a natural-language message whose meaning is most similar. The key question is then what form this grounded meaning representation should take. The existing literature suggests two broad approaches: Semantic representation The meaning of a message za is given by its denotations: that is, by the set of world states of which za may be felicitously predicated, given the existing context available to a listener. In probabilistic terms, this says that the meaning of a message za is represented by the distribution p(xa|za, xb) it induces over speaker states. Examples of this approach include Guerin and Pitt (2001) and Pasupat and Liang (2016). Pragmatic representation The meaning of a message za is given by the behavior it induces in a listener. In probabilistic terms, this says that the meaning of a message za is represented by the distribution p(ub|za, xb) it induces over actions given the listener’s observation xb. Examples of this approach include Vogel et al. (2013a) and Gauthier and Mordatch (2016). These two approaches can give rise to rather different behaviors. Consider the following example: square hexagon circle few many many The top language (in blue) has a unique name for every kind of shape, while the bottom language (in red) only distinguishes between shapes with few sides and shapes with many sides. Now imagine a simple reference game with the following form: player a is covertly assigned one of these three shapes as a reference target, and communicates that reference to b; b must then pull a lever labeled large or small depending on the size of the target shape. Blue language speakers can achieve perfect success at this game, while red language speakers can succeed at best two out of three times. How should we translate the blue word hexagon into the red language? The semantic approach suggests that we should translate hexagon as many: while many does not uniquely identify the hexagon, it produces a distribution over shapes that is closest to the truth. The pragmatic approach instead suggests that we should translate hexagon as few, as this is the only message that guarantees that the listener will pull the correct lever large. So in order to produce a correct listener action, the translator might have to “lie” and produce a maximally inaccurate listener belief. If we were exclusively concerned with building a translation layer that allowed humans and DCP agents to interoperate as effectively as possible, it would be natural to adopt a pragmatic representation strategy. But our goals here are broader: we also want to facilitate understanding, and specifically to help users of learned systems form true beliefs about the systems’ computational processes and representational abstractions. The example above demonstrates that “pragmatically” optimizing directly for task performance can sometimes lead to translations that produce inaccurate beliefs. 235 We instead build our approach around semantic representations of meaning. By preserving semantics, we allow listeners to reason accurately about the content and interpretation of messages. We might worry that by adopting a semantics-first view, we have given up all guarantees of effective interoperation between humans and agents using a translation layer. 
Fortunately, this is not so: as we will see in Section 6, it is possible to show that players communicating via a semantic translator perform only boundedly worse (and sometimes better!) than pairs of players with a common language.

5 Translation models

In this section, we build on the intuition that messages should be translated via their semantics to define a concrete translation model—a procedure for constructing a natural language ↔ neuralese dictionary given agent and human interactions. We understand the meaning of a message $z_a$ to be represented by the distribution $p(x_a|z_a, x_b)$ it induces over speaker states given listener context. We can formalize this by defining the belief distribution $\beta$ for a message $z$ and context $x_b$ as:

$\beta(z_a, x_b) = p(x_a|z_a, x_b) = \frac{p(z_a|x_a)\, p(x_b|x_a)}{\sum_{x_a'} p(z_a|x_a')\, p(x_b|x_a')}$

Here we have modeled the listener as performing a single step of Bayesian inference, using the listener state and the message generation model (by assumption shared between players) to compute the posterior over speaker states. While in general neither humans nor DCP agents compute explicit representations of this posterior, past work has found that both humans and suitably-trained neural networks can be modeled as Bayesian reasoners (Frank et al., 2009; Paige and Wood, 2016).

This provides a context-specific representation of belief, but for messages $z$ and $z'$ to have the same semantics, they must induce the same belief over all contexts in which they occur. In our probabilistic formulation, this introduces an outer expectation over contexts, providing a final measure $q$ of the quality of a translation from $z$ to $z'$:

$q(z, z') = \mathbb{E}\left[D_{KL}(\beta(z, X_b) \,\|\, \beta(z', X_b)) \mid z, z'\right] = \sum_{x_a, x_b} p(x_a, x_b|z, z')\, D_{KL}(\beta(z, x_b) \,\|\, \beta(z', x_b)) \propto \sum_{x_a, x_b} p(x_a, x_b)\, p(z|x_a)\, p(z'|x_a)\, D_{KL}(\beta(z, x_b) \,\|\, \beta(z', x_b))$   (1)

recalling that in this setting

$D_{KL}(\beta \,\|\, \beta') = \sum_{x_a} p(x_a|z, x_b) \log \frac{p(x_a|z, x_b)}{p(x_a|z', x_b)} \propto \sum_{x_a} p(x_a|x_b)\, p(z|x_a) \log \frac{p(z|x_a)}{p(z'|x_a)}$   (2)

which is zero when the messages $z$ and $z'$ give rise to identical belief distributions and increases as they grow more dissimilar. To translate, we would like to compute $tr(z_r) = \arg\min_{z_h} q(z_r, z_h)$ and $tr(z_h) = \arg\min_{z_r} q(z_h, z_r)$.

Intuitively, Equation 1 says that we will measure the quality of a proposed translation $z \mapsto z'$ by asking the following question: in contexts where $z$ is likely to be used, how frequently does $z'$ induce the same belief about speaker states as $z$? While this translation criterion directly encodes the semantic notion of meaning described in Section 4, it is doubly intractable: the KL divergence and outer expectation involve a sum over all observations $x_a$ and $x_b$ respectively; these sums are not in general possible to compute efficiently. To avoid this, we approximate Equation 1 by sampling. We draw a collection of samples $(x_a, x_b)$ from the prior over world states, and then generate for each sample a sequence of distractors $(x_a', x_b)$ from $p(x_a'|x_b)$ (we assume access to both of these distributions from the problem representation). The KL term in Equation 1 is computed over each true sample and its distractors, which are then normalized and averaged to compute the final score.

Algorithm 1 Translating messages
  given: a phrase inventory L
  function TRANSLATE(z)
    return arg min_{z' ∈ L} q̂(z, z')
  function q̂(z, z')
    // sample contexts and distractors
    x_{a,i}, x_{b,i} ~ p(X_a, X_b) for i = 1..n
    x'_{a,i} ~ p(X_a | x_{b,i})
    // compute context weights
    w̃_i ← p(z | x_{a,i}) · p(z' | x_{a,i})
    w_i ← w̃_i / Σ_j w̃_j
    // compute divergences
    k_i ← Σ_{x ∈ {x_{a,i}, x'_{a,i}}} p(z|x) log ( p(z|x) / p(z'|x) )
    return Σ_i w_i k_i
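As a concrete illustration of Algorithm 1, the sketch below estimates q̂(z, z′) by Monte Carlo sampling for the neuralese-to-natural-language direction. The speaker models and the context/distractor samplers are passed in as callables; their concrete forms (the DCP policy and the fitted human utterance model) are not shown, and the small epsilon added for numerical stability is an assumption of the sketch rather than part of Algorithm 1.

```python
import math

def q_hat(z, z_prime, p_r, p_h, sample_context, sample_distractor, n=100, eps=1e-12):
    """Monte Carlo estimate of q(z, z') following Algorithm 1.

    p_r(z, x)  -- probability of neuralese message z in speaker state x
    p_h(z, x)  -- probability of natural-language message z in speaker state x
    sample_context()        -- draws (x_a, x_b) ~ p(X_a, X_b)
    sample_distractor(x_b)  -- draws x_a' ~ p(X_a | x_b)
    """
    weights, divergences = [], []
    for _ in range(n):
        x_a, x_b = sample_context()
        x_a_alt = sample_distractor(x_b)
        weights.append(p_r(z, x_a) * p_h(z_prime, x_a))        # context weight w~_i
        k = sum(p_r(z, x) * math.log((p_r(z, x) + eps) / (p_h(z_prime, x) + eps))
                for x in (x_a, x_a_alt))                        # divergence term k_i
        divergences.append(k)
    total = sum(weights) or 1.0                                 # normalize the weights
    return sum(w * k for w, k in zip(weights, divergences)) / total

def translate(z, phrase_inventory, **kwargs):
    """Return the natural-language phrase minimizing the estimated divergence."""
    return min(phrase_inventory, key=lambda z_prime: q_hat(z, z_prime, **kwargs))
```

Translating in the other direction (natural language to neuralese) would simply swap the roles of the two speaker models and search over neuralese messages instead of the phrase inventory.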
Sampling accounts for the outer p(xa, xb) in Equation 1 and the inner p(xa|xb) in Equation 2. 236 a b xa z xb u Figure 5: Simplified game representation used for analysis in Section 6. A speaker agent sends a message to a listener agent, which takes a single action and receives a reward. The only quantities remaining are of the form p(z|xa). In the case of neuralese, this distribution already is part of the definition of the agent policy πr and can be reused directly. For natural language, we use transcripts of human interactions to fit a model that maps from world states to a distribution over frequent utterances as discussed in Section 3. Details of these model implementations are provided in Appendix B, and the full translation procedure is given in Algorithm 1. 6 Belief and behavior The translation criterion in the previous section makes no reference to listener actions at all. The shapes example in Section 4 shows that some model performance might be lost under translation. It is thus reasonable to ask whether this translation model of Section 5 can make any guarantees about the effect of translation on behavior. In this section we explore the relationship between beliefpreserving translations and the behaviors they produce, by examining the effect of belief accuracy and strategy mismatch on the reward obtained by cooperating agents. To facilitate this analysis, we consider a simplified family of communication games with the structure depicted in Figure 5. These games can be viewed as a subset of the family depicted in Figure 3; and consist of two steps: a listener makes an observation xa and sends a single message z to a speaker, which makes its own observation xb, takes a single action u, and receives a reward. We emphasize that the results in this section concern the theoretical properties of idealized games, and are presented to provide intuition about high-level properties of our approach. Section 8 investigates empirical behavior of this approach on real-world tasks where these ideal conditions do not hold. Our first result is that translations that minimize semantic dissimilarity q cause the listener to take near-optimal actions:2 2Proof is provided in Appendix A. Proposition 1. Semantic translations reward rational listeners. Define a rational listener as one that chooses the best action in expectation over the speaker’s state: U(z, xb) = arg max u X xa p(xa|xb, z)r(xa, xb, u) for a reward function r ∈[0, 1] that depends only on the two observations and the action.3 Now let a be a speaker of a language r, b be a listener of the same language r, and b′ be a listener of a different language h. Suppose that we wish for a and b′ to interact via the translator tr : zr 7→zh (so that a produces a message zr, and b′ takes an action U(zh = tr(zr), xb′)). If tr respects the semantics of zr, then the bilingual pair a and b′ achieves only boundedly worse reward than the monolingual pair a and b. Specifically, if q(zr, zh) ≤D, then Er(Xa, Xb, U(tr(Z)) ≥Er(Xa, Xb, U(Z)) − √ 2D (3) So as discussed in Section 4, even by committing to a semantic approach to meaning representation, we have still succeeded in (approximately) capturing the nice properties of the pragmatic approach. Section 4 examined the consequences of a mismatch between the set of primitives available in two languages. In general we would like some measure of our approach’s robustness to the lack of an exact correspondence between two languages. 
In the case of humans in particular we expect that a variety of different strategies will be employed, many of which will not correspond to the behavior of the learned agent. It is natural to want some assurance that we can identify the DCP’s strategy as long as some human strategy mirrors it. Our second observation is that it is possible to exactly recover a translation of a DCP strategy from a mixture of humans playing different strategies: Proposition 2. Semantic translations find hidden correspondences. Consider a fixed robot policy πr and a set of human policies {πh1, πh2, . . . } (recalling from Section 3 that each π is defined by distributions 3This notion of rationality is a fairly weak one: it permits many suboptimal communication strategies, and requires only that the listener do as well as possible given a fixed speaker— a first-order optimality criterion likely to be satisfied by any richly-parameterized model trained via gradient descent. 237 p(z|xa) and p(u|z, xb)). Suppose further that the messages employed by these human strategies are disjoint; that is, if phi(z|xa) > 0, then phj(z|xa) = 0 for all j ̸= i. Now suppose that all q(zr , zh) = 0 for all messages in the support of some phi(z|xa) and > 0 for all j ̸= i. Then every message zr is translated into a message produced by πhi, and messages from other strategies are ignored. This observation follows immediately from the definition of q(zr, zh), but demonstrates one of the key distinctions between our approach and a conventional machine translation criterion. Maximizing p(zh|zr) will produce the natural language message most often produced in contexts where zr is observed, regardless of whether that message is useful or informative. By contrast, minimizing q(zh, zr) will find the zh that corresponds most closely to zr even when zh is rarely used. The disjointness condition, while seemingly quite strong, in fact arises naturally in many circumstances—for example, players in the driving game reporting their spatial locations in absolute vs. relative coordinates, or speakers in a color reference game (Figure 6) discriminating based on lightness vs. hue. It is also possible to relax the above condition to require that strategies be only locally disjoint (i.e. with the disjointness condition holding for each fixed xa), in which case overlapping human strategies are allowed, and the recovered robot strategy is a context-weighted mixture of these. 7 Evaluation 7.1 Tasks In the remainder of the paper, we evaluate the empirical behavior of our approach to translation. Our evaluation considers two kinds of tasks: reference games and navigation games. In a reference game (e.g. Figure 6a), both players observe a pair of candidate referents. A speaker is assigned a target referent; it must communicate this target to a listener, who then performs a choice action corresponding to its belief about the true target. In this paper we consider two variants on the reference game: a simple color-naming task, and a more complex task involving natural images of birds. For examples of human communication strategies for these tasks, we obtain the XKCD color dataset (McMahan and Stone, 2015; Monroe et al., 2016) and the Caltech Birds dataset (Welinder et al., 2010) with accom(a) (b) (c) Figure 6: Tasks used to evaluate the translation model. (a–b) Reference games: both players observe a pair of reference candidates (colors or images); Player a is assigned a target (marked with a star), which player b must guess based on a message from a. 
(c) Driving game: each car attempts to navigate to its goal (marked with a star). The cars cannot see each other, and must communicate to avoid a collision. panying natural language descriptions (Reed et al., 2016). We use standard train / validation / test splits for both of these datasets. The final task we consider is the driving task (Figure 6c) first discussed in the introduction. In this task, two cars, invisible to each other, must each navigate between randomly assigned start and goal positions without colliding. This task takes a number of steps to complete, and potentially involves a much broader range of communication strategies. To obtain human annotations for this task, we recorded both actions and messages generated by pairs of human Amazon Mechanical Turk workers playing the driving game with each other. We collected close to 400 games, with a total of more than 2000 messages exchanged, from which we held out 100 game traces as a test set. 7.2 Metrics A mechanism for understanding the behavior of a learned model should allow a human user both to correctly infer its beliefs and to successfully interoperate with it; we accordingly report results of both “belief” and “behavior” evaluations. To support easy reproduction and comparison (and in keeping with standard practice in machine 238 translation), we focus on developing automatic measures of system performance. We use the available training data to develop simulated models of human decisions; by first showing that these models track well with human judgments, we can be confident that their use in evaluations will correlate with human understanding. We employ the following two metrics: Belief evaluation This evaluation focuses on the denotational perspective in semantics that motivated the initial development of our model. We have successfully understood the semantics of a message zr if, after translating zr 7→zh, a human listener can form a correct belief about the state in which zr was produced. We construct a simple state-guessing game where the listener is presented with a translated message and two state observations, and must guess which state the speaker was in when the message was emitted. When translating from natural language to neuralese, we use the learned agent model to directly guess the hidden state. For neuralese to natural language we must first construct a “model human listener” to map from strings back to state representations; we do this by using the training data to fit a simple regression model that scores (state, sentence) pairs using a bag-of-words sentence representation. We find that our “model human” matches the judgments of real humans 83% of the time on the colors task, 77% of the time on the birds task, and 77% of the time on the driving task. This gives us confidence that the model human gives a reasonably accurate proxy for human interpretation. Behavior evaluation This evaluation focuses on the cooperative aspects of interpretability: we measure the extent to which learned models are able to interoperate with each other by way of a translation layer. In the case of reference games, the goal of this semantic evaluation is identical to the goal of the game itself (to identify the hidden state of the speaker), so we perform this additional pragmatic evaluation only for the driving game. We found that the most data-efficient and reliable way to make use of human game traces was to construct a “deaf” model human. 
The evaluation selects a full game trace from a human player, and replays both the human’s actions and messages exactly (disregarding any incoming messages); the evaluation measures the quality of the natural-language-toneuralese translator, and the extent to which the (a) as speaker R H as listener R 1.00 0.50 random 0.70 direct 0.73 belief (ours) H* 0.50 0.83 0.72 0.86 (b) as speaker R H as listener R 0.95 0.50 random 0.55 direct 0.60 belief (ours) H* 0.50 0.77 0.57 0.75 Table 1: Evaluation results for reference games. (a) The colors task. (b) The birds task. Whether the model human is in a listener or speaker role, translation based on belief matching outperforms both random and machine translation baselines. learned agent model can accommodate a (real) human given translations of the human’s messages. Baselines We compare our approach to two baselines: a random baseline that chooses a translation of each input uniformly from messages observed during training, and a direct baseline that directly maximizes p(z′|z) (by analogy to a conventional machine translation system). This is accomplished by sampling from a DCP speaker in training states labeled with natural language strings. 8 Results In all below, “R” indicates a DCP agent, “H” indicates a real human, and “H*” indicates a model human player. Reference games Results for the two reference games are shown in Table 1. The end-to-end trained model achieves nearly perfect accuracy in both magenta, hot, rose, violet, purple magenta, hot, violet, rose, purple olive, puke, pea, grey, brown pinkish, grey, dull, pale, light Figure 7: Best-scoring translations generated for color task. 239 as speaker R H as listener R 0.85 0.50 random 0.45 direct 0.61 belief (ours) H* 0.5 0.77 0.45 0.57 Table 2: Belief evaluation results for the driving game. Driving states are challenging to identify based on messages alone (as evidenced by the comparatively low scores obtained by singlelanguage pairs) . Translation based on belief achieves the best overall performance in both directions. R / R H / H R / H 1.93 / 0.71 — / 0.77 1.35 / 0.64 random 1.49 / 0.67 direct 1.54 / 0.67 belief (ours) Table 3: Behavior evaluation results for the driving game. Scores are presented in the form “reward / completion rate”. While less accurate than either humans or DCPs with a shared language, the models that employ a translation layer obtain higher reward and a greater overall success rate than baselines. cases, while a model trained to communicate in natural language achieves somewhat lower performance. Regardless of whether the speaker is a DCP and the listener a model human or vice-versa, translation based on the belief-matching criterion in Section 5 achieves the best performance; indeed, when translating neuralese color names to natural language, the listener is able to achieve a slightly higher score than it is natively. This suggests that the automated agent has discovered a more effective strategy than the one demonstrated by humans in the dataset, and that the effectiveness of this strategy is preserved by translation. Example translations from the reference games are depicted in Figure 2 and Figure 7. Driving game Behavior evaluation of the driving game is shown in Table 3, and belief evaluation is shown in Table 2. Translation of messages in the driving game is considerably more challenging than in the reference games, and scores are uniformly lower; however, a clear benefit from the beliefmatching model is still visible. 
Belief matching leads to higher scores on the belief evaluation in both directions, and allows agents to obtain a higher reward on average (though task completion rates remain roughly the same across all agents). Some example translations of driving game messages are shown in Figure 8. at goal done left to top going in intersection proceed going you first following going down Figure 8: Best-scoring translations generated for driving task generated from the given speaker state. 9 Conclusion We have investigated the problem of interpreting message vectors from deep networks by translating them. After introducing a translation criterion based on matching listener beliefs about speaker states, we presented both theoretical and empirical evidence that this criterion outperforms a conventional machine translation approach at recovering the content of message vectors and facilitating collaboration between humans and learned agents. While our evaluation has focused on understanding the behavior of deep communicating policies, the framework proposed in this paper could be much more generally applied. Any encoder– decoder model (Sutskever et al., 2014) can be thought of as a kind of communication game played between the encoder and the decoder, so we can analogously imagine computing and translating “beliefs” induced by the encoding to explain what features of the input are being transmitted. The current work has focused on learning a purely categorical model of the translation process, supported by an unstructured inventory of translation candidates, and future work could explore the compositional structure of messages, and attempt to synthesize novel natural language or neuralese messages from scratch. More broadly, the work here shows that the denotational perspective from formal semantics provides a framework for precisely framing the demands of interpretable machine learning (Wilson et al., 2016), and particularly for ensuring that human users without prior exposure to a learned model are able to interoperate with it, predict its behavior, and diagnose its errors. 240 Acknowledgments JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei Fellowship. We are grateful to Lisa Anne Hendricks for assistance with the Caltech Birds dataset. References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. 2002. The complexity of decentralized control of Markov decision processes. Mathematics of operations research 27(4):819–840. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 . Jilles Steeve Dibangoye, Christopher Amato, Olivier Buffet, and Franc¸ois Charpillet. 2016. Optimally solving Dec-POMDPs as continuous-state MDPs. Journal of Artificial Intelligence Research 55:443– 497. Anca Dragan and Siddhartha Srinivasa. 2013. Generating legible motion. In Robotics: Science and Systems. Jakob Foerster, Yannis M Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems. pages 2137–2145. Michael C Frank, Noah D Goodman, Peter Lai, and Joshua B Tenenbaum. 2009. Informative communication in word production and word learning. 
In Proceedings of the 31st annual conference of the cognitive science society. pages 1228–1233. Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. 2016. Compact bilinear pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 317–326. Jon Gauthier and Igor Mordatch. 2016. A paradigm for situated and goal-driven language learning. arXiv preprint arXiv:1610.03585 . Frank Guerin and Jeremy Pitt. 2001. Denotational semantics for agent communication language. In Proceedings of the fifth international conference on Autonomous agents. ACM, pages 497–504. Matthew Hausknecht and Peter Stone. 2015. Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527 . Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In European Conference on Computer Vision. Springer, pages 3–19. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 787–798. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2016a. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182 . Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2016b. Towards multi-agent communication-based language learning. arXiv preprint arXiv:1605.07133 . Brian McMahan and Matthew Stone. 2015. A Bayesian model of grounded color semantics. Transactions of the Association for Computational Linguistics 3:103–115. Will Monroe, Noah D Goodman, and Christopher Potts. 2016. Learning to generate compositional color descriptions. arXiv preprint arXiv:1606.03821 . Brooks Paige and Frank Wood. 2016. Inference networks for sequential monte carlo in graphical models. volume 48. Panupong Pasupat and Percy Liang. 2016. Inferring logical forms from denotations. arXiv preprint arXiv:1606.06900 . Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. 2016. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 49–58. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pages 1135–1144. Maayan Roth, Reid Simmons, and Manuela Veloso. 2005. Reasoning about joint beliefs for executiontime communication decisions. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems. ACM, pages 786–793. Hendrik Strobelt, Sebastian Gehrmann, Bernd Huber, Hanspeter Pfister, and Alexander M Rush. 2016. Visual analysis of hidden state dynamics in recurrent neural networks. arXiv preprint arXiv:1606.07461 . 241 Sainbayar Sukhbaatar, Rob Fergus, et al. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems. pages 2244–2252. Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112. Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. 
Context-aware captions from context-agnostic supervision. arXiv preprint arXiv:1701.02870 . Adam Vogel, Max Bodoia, Christopher Potts, and Daniel Jurafsky. 2013a. Emergence of Gricean maxims from multi-agent decision theory. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. pages 1072– 1081. Adam Vogel, Christopher Potts, and Dan Jurafsky. 2013b. Implicatures and nested beliefs in approximate Decentralized-POMDPs. In ACL (2). pages 74–80. P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. 2010. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology. Andrew Gordon Wilson, Been Kim, and William Herlands. 2016. Proceedings of nips 2016 workshop on interpretable machine learning for complex systems. arXiv preprint arXiv:1611.09139 . Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. 2016. A joint speaker-listener-reinforcer model for referring expressions. arXiv preprint arXiv:1612.09542 . Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision. Springer, pages 818–833. 242
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 243–254 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1023 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 243–254 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1023 Obtaining referential word meanings from visual and distributional information: Experiments on object naming Sina Zarrieß and David Schlangen Dialogue Systems Group // CITEC // Faculty of Linguistics and Literary Studies Bielefeld University, Germany {sina.zarriess,david.schlangen}@uni-bielefeld.de Abstract We investigate object naming, which is an important sub-task of referring expression generation on real-world images. As opposed to mutually exclusive labels used in object recognition, object names are more flexible, subject to communicative preferences and semantically related to each other. Therefore, we investigate models of referential word meaning that link visual to lexical information which we assume to be given through distributional word embeddings. We present a model that learns individual predictors for object names that link visual and distributional aspects of word meaning during training. We show that this is particularly beneficial for zero-shot learning, as compared to projecting visual objects directly into the distributional space. In a standard object naming task, we find that different ways of combining lexical and visual information achieve very similar performance, though experiments on model combination suggest that they capture complementary aspects of referential meaning. 1 Introduction Expressions referring to objects in visual scenes typically include a word naming the type of the object: E.g., house in Figure 1 (a), or, as a very general type, thingy in Figure 1 (d). Determining such a name is a crucial step for referring expression generation (REG) systems, as many other decisions concerning, e.g., the selection of attributes follow from it (Dale and Reiter, 1995; Krahmer and Van Deemter, 2012). For a long time, however, research on REG mostly assumed the availability of symbolic representations of ref(a)“house” (b)“buildings” (c)“large structure” (d)“roof thingy” Figure 1: Examples of object names in the REFERIT corpus referring to instances of buildings erent and scene, and sidestepped questions about how speakers actually choose these names, due to the lack of models capable of capturing what a word like house refers to in the real world. Recent advances in image processing promise to fill this gap, with state-of-the-art computer vision systems being able to classify images into thousands of different categories (e.g. Szegedy et al. (2015)). However, classification is not naming (Ordonez et al., 2016). Standard object classification schemes are inherently “flat”, and treat object labels as mutually exclusive (Deng et al., 2014). A state-of-the-art object recognition system would be trained to classify the object in e.g. Figure 1 (a) as either house or building, ignoring the lexical similarity between these two names. In contrast, humans seem to be more flexible as to the chosen level of generality. Depending on the prototypicality of the object to name, and possibly other visual properties, a general name might be more or less appropriate. 
For instance, a robin can be named bird, but a penguin is better referred 243 to as “penguin” (Rosch, 1978); along the same lines, the rather unusual building in Figure 1 (c) that is not easy to otherwise categorise was named “structure”. Other work at the intersection of image and language processing has investigated models that learn to directly associate visual objects with a continuous representation of word meaning, i.e. through cross-modal transfer into distributional vector spaces (Frome et al., 2013; Norouzi et al., 2013). Here, the idea is to exploit a powerful model of lexical similarity induced from large amounts text for being able to capture inherent lexical relations between object categories. Thus, under the assumption that such semantic spaces represent, in some form at least, taxonomic knowledge, this makes labels on different levels of specificity available for a given object. Moreover, if the mapping is sufficiently general, it should be able to map objects to an appropriate label, even if during training of the mapping this label has not been seen (zero-shot learning). While cross-modal transfer seems to be a conceptually attractive model for learning object names, it is based on an important assumption that, in our view, has not received sufficient attention in previous works: it assumes that a given distributional vector space constitutes an optimal target representation that visual instances of objects can be mapped to. However, distributional representations of word meaning are known to capture a rather fuzzy notion of lexical similarity, e.g. car is similar to van and to street. A cross-modal transfer model is “forced” to learn to map objects into the same area in the semantic space if their names are distributionally similar, but regardless of their actual visual similarity. Indeed, we have found in a recent study that the contribution of distributional information to learning referential word meanings is restricted to certain types of words and does not generalize across the vocabulary (Zarrieß and Schlangen, 2017). The goal of this work is to learn a model of referential word meaning that makes accurate object naming predictions and goes beyond treating words as independent, mutually exclusive labels in a flat classification scheme. We extend upon work on learning models of referential word use from corpora of images paired with referring expressions (Schlangen et al., 2016; Zarrieß and Schlangen, 2017) that treats words as individual predictors capturing referential appropriateness. We explore different ways of linking these predictors to distributional knowledge, during application and during training. We find that these different models achieve very similar performance in a standard object naming task, though experiments on model combination suggest that they capture complementary aspects of referential meaning. In a zero-shot setup of an object naming task, we find that combining lexical and visual information during training is most beneficial, outperforming variants of cross-modal transfer. 2 Related Work Grounding and Reference An early example for work in REG that goes beyond Dale and Reiter (1995)’s dominant symbolic paradigm is Deb Roy’s work from the early 2000s (Roy et al., 2002; Roy, 2002, 2005). Roy et al. (2002) use computer vision techniques to process a video feed, and to compute colour, positional and spatial features. 
These features are then associated in a learning process with certain words, resulting in an association of colour features with colour words, spatial features with prepositions, etc., and based on this, these words can be interpreted with reference to the scene currently presented to the video feed. Whereas Roy’s work still looked at relatively simple scenes with graphical objects, research on REG has recently started to investigate set-ups based on real-world images (Kazemzadeh et al., 2014; Gkatzia et al., 2015; Zarrieß and Schlangen, 2016; Mao et al., 2015). Importantly, the lowlevel visual features that can be extracted from these scenes correspond less directly to particular word classes. Moreover, the visual scenes contain many different types of objects, which poses new challenges for REG. For instance, Zarrieß and Schlangen (2016) find that semantic errors related to mismatches between nouns (e.g. the system generates tree vs. man) are particularly disturbing for users. Whereas Zarrieß and Schlangen (2016) propose a strategy to avoid object names when the systems confidence is low, we focus on improving the generation of object names, using distributional knowledge as an additional source. Similarly, Ordonez et al. (2016) have studied the problem of deriving appropriate object names, or so-called entry-level categories, from the output of an object recognizer. Their approach focusses on linking abstract object categories in ImageNet 244 to actual words via various translation procedures. We are interested in learning referential appropriateness and extensional word meanings directly from actual human referring expressions (REs) paired with objects in images, using an existing object recognizer for feature extraction. Multi-modal distributional semantics Distributional semantic models are a well-known method for capturing lexical word meaning in a variety of tasks (Turney and Pantel, 2010; Mikolov et al., 2013; Erk, 2016). Recent work on multimodal distributional vector spaces (Feng and Lapata, 2010; Silberer and Lapata, 2014; Kiela and Bottou, 2014; Lazaridou et al., 2015b; Kottur et al., 2016) has aimed at capturing semantic similarity even more accurately by integrating distributional and perceptual features associated with words (mostly taken from images) into a single representation. Cross-modal transfer Rather than fusing different modalities into a single, joint space, other work has looked at cross-modal mapping between spaces. Herbelot and Vecchi (2015) present a model that learns to map vectors in a distributional space to vectors in a set-theoretic space, showing that there is a functional relationship between distributional information and conceptual knowledge representing quantifiers and predicates. More related to our work are cross-modal mapping models,that learn to transfer from a representation of an object or image in the visual space to a vector in a distributional space (Socher et al., 2013; Frome et al., 2013; Norouzi et al., 2013; Lazaridou et al., 2014). Here, the motivation is to exploit the rich lexical knowledge encoded in a distributional space for learning visual classifications. In practice, these models are mostly used for zeroshot learning where the test set contains object categories not observed during training. When tested on standard object recognition tasks, transfer, however, comes at a price. Frome et al. (2013) and Norouzi et al. 
(2013) both find that it slightly degrades performance as compared to a plain object classification using standard accuracy metrics (called flat “hit @k metric” in their paper). Interestingly though, Frome et al. (2013) report better performance using “hierarchical precision”, which essentially means that transfer predicts words that are ontologically closer to the gold label and makes “semantically more reasonable errors”. To the best of our knowledge, this pattern has not been systematically investigated any further. Another known problem with cross-modal transfer is that it seems to generalize less well than expected, i.e. tends to reproduce word vectors observed during training (Lazaridou et al., 2015a). In this work, we present a model that exploits distributional knowledge for learning referential word meaning as well, but explore and compare different ways of combining visual and lexical aspects of referential word meaning. 3 Task and Data We define object naming as follows: Given an object x in an image, the task is to predict a word w that could be used as the head noun of a realistic referring expression. (Cf. discussion above: “bird” when naming a robin, but “penguin” when naming a penguin.) To get at this, we develop our approach using a corpus of referring expressions produced by human users under natural, interactive conditions (Kazemzadeh et al., 2014), and train and test on the corresponding head nouns in these REs. This is similar to picture naming setups used in psycholinguistic research (cf. Levelt et al. (1991)) and based on the simplifying assumption that the name used for referring to an object can be determined successfully without looking at other objects in the image. We now summarise the details of our setup: Corpus We train and test on the REFERIT corpus (Kazemzadeh et al., 2014), which is based on the SAIAPR image collection (Grubinger et al., 2006) (99.5k image regions;120K REs). We follow (Schlangen et al., 2016) and select words with a minimum frequency of 40 in these two data sets, which gives us a vocabulary of 793 words. Names For most of our experiments, we only use a subset of this vocabulary, namely the set of object names. As the REs contain nouns that cannot be considered to be object names (background, bottom, etc.), we extract a list of names from the semantically annotated held-out set released with the REFERIT. These correspond to ‘entry-level’ nouns mentioned in Kazemzadeh et al. (2014). This gives us a list of 159 names. This set corresponds to the majority of object names in the corpus: out of the 99.5K available image regions, we use 80K for training and testing. Thus, our experiments are on a smaller scale as compared 245 to (Ordonez et al., 2016). Nevertheless, the data is challenging, as the corpus contains references to objects that fall outside of the object labeling scheme that available object recognition systems are typically optimized for, cf. Hu et al. (2015)’s discussion on “stuff” entities such “sky” or “grass” in the REFERIT data. For testing, we remove relational REs (containing a relational preposition such as ‘left of X’), because here we cannot be sure that the head noun of the target is fully informative; we also remove REs with more than one head noun from our list (i.e. these are mostly relational expressions as well such as ‘girl laughing at boy’). We pair each image region from the test set with its corresponding names from the remaining REs. Image and Word Embeddings Following Schlangen et al. 
(2016), we derive representations of our visual inputs with a convolutional neural network, ‘GoogleNet’ (Szegedy et al., 2015), which was trained on the ImageNet corpus (Deng et al., 2009), and extract the final fully-connected layer before the classification layer, to give us a 1024 dimensional representation of the region. We add 7 features that encode information about the region relative to the image, thus representing each object as a vector of 1031 features. As distributional word vectors, we use the word2vec representations provided by Baroni et al. (2014) (trained with CBOW, 5-word context window, 10 negative samples, 400 dimensions). 4 Three Models of Interfacing Visual and Distributional Information 4.1 Direct Cross-Modal Mapping Following Lazaridou et al. (2014), referential meaning can be represented as a translation function that projects visual representations of objects to linguistic representations of words in a distributional vector space. Thus, in contrast to standard object recognition systems or the other models we will use here, cross-modal mapping does not treat words as individual labels or classifiers, but learns to directly predict continuous representations of words in a vector space, such as the space defined by the word2vec embeddings that we use in this work. This model will be called TRANSFER below. During training, we pair each object with the distributional embedding of its name, and use standard Ridge regression for learning the transformation. Lazaridou et al. (2014) and Lazaridou et al. (2015a) test a range of technical tweaks and different algorithms for cross-modal mapping. For ease of comparison with other models, we stick with simple Ridge Regression in this work. For decoding, we map an object into the distributional space, and retrieve the nearest neighbors of the predicted vector using cosine similarity. In theory, the model should generalize easily to words that it has not observed in a pair with an object during training as it can map an object anywhere in the distributional space. 4.2 Lexical Mapping Through Individual Word Classifiers Another approach is to keep visual and distributional information separate, by training a separate visual classifier for each word w in the vocabulary. Predictions can then be mapped into distributional space during application time via the vectors of the predicted words. Here, we use Schlangen et al. (2016)’s WAC model, building the training set for each word w as follows: all visual objects in a corpus that have been referred to as w are used as positive instances, the remaining objects as negative instances. Thus, the classifiers learn to predict referential appropriateness for individual words based on the visual features of the objects they refer to, in isolation of other words. During decoding, we apply all word classifiers from the model’s vocabulary to the given object, and take the argmax over the individual word probabilities. The model predicts names directly, without links into a distributional space. In order to extend the model’s vocabulary for zero-shot learning, we follow Norouzi et al. (2013) and associate the top n words with their corresponding distributional vector and compute the convex combination of these vectors. Then, in parallel to cross-modal mapping, we retrieve the nearest neighbors of the combined embedding from the distributional space. 
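To make the two models above concrete, here is a rough Python sketch using scikit-learn, shown before the remaining decoding details. The helper names are ours; the logistic-regression classifier for WAC and the score-weighted convex combination are illustrative assumptions rather than the authors' exact configuration, and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# --- TRANSFER: map 1031-d visual vectors into the 400-d word2vec space ---
def train_transfer(X_vis, Y_w2v, alpha=1.0):
    return Ridge(alpha=alpha).fit(X_vis, Y_w2v)

def transfer_name(model, x_vis, word_vectors, k=5):
    pred = model.predict(x_vis.reshape(1, -1))[0]
    # nearest neighbours of the predicted vector under cosine similarity
    return sorted(word_vectors, key=lambda w: -cosine(pred, word_vectors[w]))[:k]

# --- WAC: one visual classifier per word in the vocabulary ---
def train_wac(X_vis, gold_names, vocab):
    classifiers = {}
    for w in vocab:
        y = np.array([1 if n == w else 0 for n in gold_names])
        classifiers[w] = LogisticRegression(max_iter=1000).fit(X_vis, y)
    return classifiers

def wac_scores(classifiers, x_vis):
    return {w: clf.predict_proba(x_vis.reshape(1, -1))[0, 1]
            for w, clf in classifiers.items()}

# --- zero-shot decoding: convex combination of the top-n word embeddings ---
def wac_zero_shot(classifiers, x_vis, known_vectors, all_vectors, n=5, k=5):
    scores = wac_scores(classifiers, x_vis)
    top = sorted(scores, key=scores.get, reverse=True)[:n]
    weights = np.array([scores[w] for w in top])
    weights = weights / weights.sum()
    combined = sum(wi * known_vectors[w] for wi, w in zip(weights, top))
    return sorted(all_vectors, key=lambda w: -cosine(combined, all_vectors[w]))[:k]
```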
Thus, with this model, we use two different modes of decoding: one that projects into distributional space, one that only applies the available word classifiers. We did some small-scale experiments to find an optimal value for n, similar to Norouzi et al. (2013). In our case, performance started to decrease systematically with n > 10, but did not differ significantly for values below 10. In Section 5, we will report results for n set to 5 and 10. 246 4.3 Word Prediction via Cross-Modal Similarity Mapping Finally, we implement an approach that combines ideas from cross-modal mapping with the WAC model: we train individual predictors for each word in the vocabulary, but, during training, we exploit lexical similarity relations encoded in a distributional space. Instead of treating a word as a binary classifier, we annotate its training instances with a fine-grained similarity signal according to their object names. When building the training set for such a word predictor w, instead of simply dividing objects into w and ¬w instances, we label each object with a real-valued similarity obtained from cosine similarity between w and v in a distributional vector space, where v is the word that was used to refer to the object. Thus, we task the model with jointly learning similarities and referential appropriateness, by training it with Ridge regression on a continuous output space. Object instances where v = w (i.e., the positive instances in the binary setup) have maximal similarity; the remaining instances have a lower value which is more or less close to maximal similarity. This is the SIM-WAP model, recently proposed in Zarrieß and Schlangen (2017). Importantly, and going beyond Zarrieß and Schlangen (2017), this model allows for an innovative treatment of words that only exist in a distributional space (without being paired with visual referents in the image corpus): as the predictors are trained on a continuous output space, no genuine positive instances of a word’s referent are needed. When training a predictor for such a word w, we use all available objects from our corpus and annotate them with the expected lexical similarity between w and the actual object names v, which for all objects will be below the maximal value that marks genuine positive instances. During decoding, this model does not need to project its predictions into a distributional space, but it simply applies all available predictors to the object, and takes the argmax over the predicted referential appropriateness scores. 5 Experiment 1: Naming Objects This Section reports on experiments in a standard setup of the object naming task where all object names are paired with visual instances of their referents during training. In a comparable task, i.e. object recognition with known object categories, cross-modal projection or transfer approaches have been reported to perform worse than standard object classification methods (Frome et al., 2013; Norouzi et al., 2013). This seems to suggest that lexical or at least distributional knowledge is detrimental when learning what a word refers to in the real world and that referential meaning should potentially be learned from visual object representation only. 5.1 Model comparison Setup We use the train/test split of REFERIT data as in (Schlangen et al., 2016). We consider image regions with non-relational referring expressions that contain at least one of the 159 head nouns from the list of entry-level nouns (see section 3). 
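As a point of reference, the SIM-WAP training scheme described in Section 4.3 can be sketched as follows; the helper names and the Ridge configuration are our own illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def train_sim_wap(X_vis, gold_names, vocab, w2v, alpha=1.0):
    """One Ridge predictor per word w, trained on graded targets: each object is
    labelled with cos(w2v[w], w2v[v]), where v is the name actually used for it
    (maximal when v == w). Words never used as names in the image corpus can
    still receive a predictor, since only their word2vec vector is needed."""
    predictors = {}
    for w in vocab:
        y = np.array([cosine(w2v[w], w2v[v]) for v in gold_names])
        predictors[w] = Ridge(alpha=alpha).fit(X_vis, y)
    return predictors

def sim_wap_name(predictors, x_vis, k=5):
    # apply all predictors and rank words by predicted referential appropriateness
    scores = {w: float(p.predict(x_vis.reshape(1, -1))[0])
              for w, p in predictors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```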
This amounts to 6208 image regions for testing and 73K instances for training. Results Table 1 shows accuracies in the object naming task for the TRANSFER, WAC and SIMWAP models according to their accuracies in the top n, including two variants of WAC where its top 5 and top 10 predictions are projected into the distributional space. Overall, the models achieve very similar performance. However, there is an interesting pattern when comparing accuracies @1 and @2 to accuracies in the top 5 predictions. Thus, looking at accuracies for the top (two) predictions, the various models that link referential meaning to word representations in the distributional space all perform slightly worse than the plain WAC model, i.e. individual word classifiers trained on visual features only. This might suggest that certain aspects of referential word meaning are learned less accurately when mapping from visual to distributional space (which replicates results reported in the literature on standard object recognition benchmarks). On the other hand, the SIM-WAP model is on a par with WAC in terms of the @5 accuracy. This effect suggests that distributional knowledge that SIM-WAP has access to during training sometimes distracts the model from predicting the exact name chosen by a human speaker, but that SIM-WAP is still able to rank it among the most probable names. As a simple accuracy-based evaluation is not suited to fully explain this pattern, we carry out a more detailed analysis in Section 5.3. 247 hit @k(%) @1 @2 @5 transfer 48.34 60.49 74.89 wac 49.34 61.86 75.35 wac, project top5 48.73 61.10 74.07 wac, project top10 48.68 61.23 74.31 sim-wap 48.13 60.60 75.40 Table 1: Accuracies in object naming hit @k(%) 1 5 10 sim-wap + transfer 49.10 61.78 75.81 sim-wap + wac 51.10 63.45 77.92 transfer + wac 51.13 63.76 77.84 wac + transfer + sim-wap 52.19 64.71 78.40 Table 2: Object naming acc., combined models 5.2 Model combination In order to get more insight into why the TRANSFER and SIM-WAP models produce slightly worse results than individual visual word classifiers, we now test to what extent the different models are complementary and combine them by aggregating over their naming predictions. If the models are complementary, their combination should lead to more confident and accurate naming decisions. Setup We combine TRANSFER, SIM-WAP and WAC by aggregating the scores they predict for different object names for a given object. During testing, we apply all models to an image region and consider words ranked among the top 10. We first normalize the referential appropriateness scores in each top-10 list and then compute their sum. This aggregation scheme will give more weight to words that appear in the top 10 list of different models, and less weight to words that only get top-ranked by a single model. We test on the same data as in Section 5.1. Results Table 2 shows that all model combinations improve over the results of their isolated models in Table 1, suggesting that WAC, TRANSFER and SIM-WAP indeed do capture complementary aspects of referential word meaning. On their own, the distributionally informed models are less tuned to specific word occurrences than the visual word classifiers in the WAC model, but they can add useful information which leads to a clear overall improvement. We take this as a promising finding, supporting our initial hypothesis that knowledge on lexical distributional meaning should and Av. 
cosine similarity among top k gold - top k 5 10 5 10 transfer 0.32 0.27 0.28 0.25 wac 0.18 0.20 0.18 0.16 sim-wap 0.32 0.26 0.28 0.25 Table 3: Cosine similarities between word2vec embeddings of nouns generated in the top k can be exploited when learning how to use words for reference. 5.3 Analysis Figure 2 illustrates objects from our test set where the combination of TRANSFER, SIM-WAP and WAC predicts an accurate name, whereas the models in isolation do not. These examples give some interesting insight into why the models capture different aspects of referential word meaning. Word Similarities Many of the examples in Figure 2 suggest that the object names ranked among the top 3 by the TRANSFER and SIMWAP model are semantically similar to each other, whereas WAC generates object names on top that describe very different underlying object categories, such as seal / rock in Figure 2(a), animal / lamp in Figure 2(g) or chair / shirt in Figure 2(c). To quantify this general impression, Table 3 shows cosine similarities among words in the top n generated by our models, using their word2vec embeddings. The average cosine similarity between words in our vocabulary is 0.17. The TRANSFER and SIM-WAP model rank words on top that are clearly more similar to each other than word pairs on average, whereas words ranked top by the WAC model are more dissimilar to each other. Another remarkable finding is that the words generated by TRANSFER and SIM-WAP are not only more similar among the top predictions, but also more similar to the gold name (Table 3 , right columns). This result is noteworthy since the accuracies for the top predictions shown in Table 1 are slightly below WAC. In general, this suggests that there is a trade-off between optimizing a model of referential word meaning to exact naming decisions, or tailoring it to make lexically consistent predictions. This parallels findings by Frome et al. (2013) who found that their transfer-based object recognition made “semantically more reasonable” errors than a standard convolutional network while 248 not improving accuracies for object recognition, see discussion in Section 2. Additional evaluation metrics, such as success rates in a human evaluation (cf. Zarrieß and Schlangen (2016)), would be an interesting direction for more detailed investigation here. Word Use But even though the WAC classifiers lack knowledge on lexical similarities, they seem to able to detect relatively specific instances of word use such as hut in Figure 2(b), shirt in 2(c) or lamp in 2(h). Here, the combination with TRANSFER and SIM-WAP is helpful to give more weight to the object name that is taxonomically correct (sometimes pushing up words below the top-3 and hence not shown in Figure 2). In Figure 1(e), SIMWAP and TRANSFER give more weight to typical names for persons, whereas WAC top-ranks more unusual names, reflecting that the person is difficult to identify visually. Another observation is that the mapping models have difficulties dealing with object names in singular and plural. As these words have very similar representations in the distributional space, they are often predicted as likely variants among the top 10 by SIM-WAP and TRANSFER, whereas the WAC model seems to predict inappropriate plural words less often among the top 3. Such specific phenomena at the intersection of visual and semantic similarity have found very little attention in the literature. We will investigate them further in our Experiments on zeroshot naming in the following Section. 
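The similarity statistics reported in Table 3 can be computed along the following lines; this is a minimal sketch assuming a mapping w2v from words to their embeddings.

```python
import itertools
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def topk_similarity_stats(top_k_words, gold_name, w2v):
    """Two statistics per test item, as in Table 3: average pairwise cosine
    similarity among the top-k predicted names, and average cosine similarity
    between each predicted name and the gold name."""
    among = [cosine(w2v[u], w2v[v])
             for u, v in itertools.combinations(top_k_words, 2)]
    to_gold = [cosine(w2v[w], w2v[gold_name]) for w in top_k_words]
    return float(np.mean(among)), float(np.mean(to_gold))
```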
6 Zero-Shot Naming Zero-shot learning is an attractive prospect for REG from images, as it promises to overcome dependence on pairings of visual instances and natural names being available for all names, if visual/referential data can be generalised from other types of information. Previous work has looked at the feasibility of zero-shot learning as a function of semantic similarity or ontological closeness between unknown and known categories, and confirmed the intuition that the task is harder the less close unknown categories are to known ones (Frome et al., 2013; Norouzi et al., 2013). Our experiments on object naming in Section 5 suggest that lexical similarities encoded in a distributional space might not always fully carry over to referential meaning. This could constitute an additional challenge for zero-shot learning, as distributional similarities might be misleading when the model has to fully rely on them for learning referential word meanings. Therefore, the following experiments investigate the performance of our models in zero-shot naming as a function of the lexical relation between unknown and known object names, i.e. namely hypernyms and singular/plurals. Both relations are typically captured by distributional models of word meaning in terms of closeness in the vector space, but their visual and referential relation is clearly different. 6.1 Vocabulary Splits and Testsets Random As in previous work on zero-shot learning, we consider zero-shot naming for words of varying degrees of similarity. We randomly split our 159 names from Experiment 1 into 10 subsets. We train the models on 90% of the nouns (and all their visual instances in the image corpus) and test on the set of image regions that are named with words which the model did not observe during training. Results reported in Table 4 on the random test set correspond to averaged scores from cross-validation over the 10 splits. Hypernyms We manually split the model’s vocabulary into set of hypernyms (see Appendix A) and the remaining nouns. We train the models on those 84K image regions that where not named with a hypernym, and test on 8895 image regions that were named with a hypernym in the corpus. We checked that for each of these hypernyms, the vocabulary contains at least one or two names that can be considered as hyponyms, i.e. the model sees objects during training that are instances of vehicle for example, but never encounters actual uses of that name. This test set is particularly interesting from an REG perspective, as objects named with very general terms by human speakers are often difficult to describe with more common, but more specific terms, as is illustrated by the uses of structure and thingy in Figure 1. Singulars/Plurals We pick 68 words from our vocabulary that can be grouped into 34 singularplural noun pairs (see Appendix A). From each pair, we randomly include the singular or plural noun in the set of zero-shot nouns. Thus, we make sure that the model encounters singular and plural names during training, but it never encounters both variants of a name. This results training split of 23K image regions and a test split of 13825 instances. 
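The construction of these zero-shot splits can be summarized in a short sketch; the function below is our own simplification of the protocol described above, with region_names assumed to map each image region to the set of names used for it.

```python
def zero_shot_split(regions, region_names, held_out_names):
    """Regions named with any held-out word form the zero-shot test set;
    training uses only regions whose names were all observed during training."""
    held_out = set(held_out_names)
    test = [r for r in regions if region_names[r] & held_out]
    train = [r for r in regions if not (region_names[r] & held_out)]
    return train, test
```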
249 (a) wac: seal, rock, water sim-wap: side, rock,rocks transfer: rocks, rock, water combination: rock (c) wac: chair, shirt, guy sim-wap: woman, man, girl transfer: door, woman, window combination: shirt (e) wac: chick, person, guy sim-wap: man, person, woman transfer: man, guy, girl combination: person (g) wac: animal, lamp, table sim-wap: man, girl, person transfer: man, clouds, cloud combination: person (b) wac: cactus, hut, mountain sim-wap: side, rock, mountain transfer: mountain, rocks, rock combination: hut (d) wac: roof, house, building sim-wap: building, house, trees transfer: building, house, trees combination: house (f) wac: bush, bushes, tree sim-wap: trees, tree, grass transfer: trees, tree, bushes combination: bushes (h) wac: post, light, lamp sim-wap: tree, sky, pole transfer: tree, sky, trees combination: lamp Figure 2: Examples from object naming experiment where model combination is accurate Zero-shot Model full vocab disjoint vocab names @1 @2 @5 @10 @1 @2 Random transfer 0.05 2.38 16.57 35.71 41.49 62.34 wac, project top10 0.00 4.42 21.16 39.17 38.03 58.07 wac, project top5 0.00 4.39 21.63 40.01 37.46 57.36 sim-wap 3.71 13.13 36.49 54.44 42.28 64.26 Hypernyms transfer 0.07 1.25 7.75 29.93 59.88 73.88 wac, project top10 0.00 3.01 15.55 36.99 50.51 66.33 wac, project top5 0.00 2.78 16.75 38.13 47.73 64.38 sim-wap 3.16 10.33 31.14 49.62 57.55 70.15 Singulars/Plurals transfer 0.01 22.84 44.30 72.85 34.56 51.79 wac, project top10 0.00 22.21 43.43 68.95 31.46 48.76 wac, project top5 0.00 22.18 43.93 69.33 31.46 48.88 sim-wap 15.39 34.73 56.62 77.32 37.24 54.02 Table 4: Accuracies in zero-shot object naming on different vocabulary splits 250 6.2 Evaluation Some previous work on zero-shot image labeling assumes additional components that first identify whether an image should be labelled by a known or unknown word (Frome et al., 2013). We follow Lazaridou et al. (2014) and let the model decide whether to refer to an object by a known or unknown name. Related to that, distinct evaluation procedures have been used in the literature on zero-shot learning: Testing on full vocabulary A realistic way to test zero-shot learning performance is to consider all words from a given vocabulary during testing, though the testset only contains instances of objects that have been named with a ‘zero-shot word’ (for which no visual instances were seen during training). Accuracies in this setup reflect how well the model is able to generalize, i.e. how often it decides to deviate from the words it was trained on, and (implicitly) predicts that the given object requires a “new” name. In case of the (i) hypernym and (ii) singular/plural test set, this accuracy also reflects to what extent the model is able to detect cases where (i) a more general or vague term is needed, where (ii) an unknown singular/plural counterpart of a known object type occurs. Testing on disjoint vocabulary Alternatively, the model’s vocabulary can be restricted during testing to zero-shot words only, such that names encountered during training and testing are disjoint, see e.g. (Lampert et al., 2009, 2013). This setup factors out the generalization problem, and assesses to what extent a model is able to capture the referential meaning of a word that does not have instances in the training data. 6.3 Results As compared to Experiment 1 where models achieved similar performance, differences are more pronounced in the zero-shot setup, as shown in Table 4. 
In particular, we find that the SIMWAP model which induces individual predictors for words that have not been observed in the training data is clearly more successful than TRANSFER or WAC that project predictions into the distributional space. When tested on the full vocabulary, we find that TRANSFER and WAC very rarely generate names whose referents were excluded from training, which is in line with observations made by Lazaridou et al. (2015a). The SIM-WAP predictors generalize much better, in particular on the singular/plural testset. An interesting exception is the good performance of the TRANSFER model on the hypernym test set, when evaluated with a disjoint vocabulary. This corroborates evidence from Experiment 1, namely that the transfer model captures taxonomic aspects of object names better than the other models. Projection via individual word classifiers, on the other hand, seems to generalize better than TRANSFER, at least when looking at accuracies @2 ... @10. Thus, combining several vectors predicted by a model of referential word meaning can provide additional information, as compared to mapping an object to a single vector in distributional space. More work is needed to establish how these approaches can be integrated more effectively. 7 Discussion and Conclusion In this paper, we have investigated models of referential word meaning, using different ways of combining visual information about a word’s referent and distributional knowledge about its lexical similarities. Previous cross-modal mapping models essentially force semantically similar objects to be mapped into the same area in the semantic space regardless of their actual visual similarity. We found that cross-modal mapping produces semantically appropriate and mutually highly similar object names in its top-n list, but does not preserve differences in referential word use (e.g. appropriatness of person vs. woman) especially within the same semantic field. We have shown that it is beneficial for performance in standard and zeroshot object naming to treat words as individual predictors that capture referential appropriateness and are only indirectly linked to a distributional space, either through lexical mapping during application or through cross-modal similarity mapping during training. As we have tested these approaches on a rather small vocabulary, which may limit generality of conclusions, future work will be devoted to scaling up these findings to larger test sets, as e.g. recently collected through conversational agents (Das et al., 2016) that circumvent the need for human-human interaction data. Also from an REG perspective, various extensions of this approach are possible, such as the inclusion of contextual information during object naming and its combination with attribute selection. 251 Acknowledgments We acknowledge support by the Cluster of Excellence “Cognitive Interaction Technology” (CITEC; EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG). We thank the anonymous reviewers for their very valuable, very detailed and highly interesting comments. References Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 238–247. http://www.aclweb.org/anthology/P14-1023. 
Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive Science 19(2):233–263. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M. F. Moura, Devi Parikh, and Dhruv Batra. 2016. Visual dialog. CoRR abs/1611.08669. http://arxiv.org/abs/1611.08669. Jia Deng, Nan Ding, Yangqing Jia, Andrea Frome, Kevin Murphy, Samy Bengio, Yuan Li, Hartmut Neven, and Hartwig Adam. 2014. Large-scale object classification using label relation graphs. In European Conference on Computer Vision. Springer, pages 48–64. Jia Deng, W. Dong, Richard Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09. Katrin Erk. 2016. What do you know about an alligator when you know the company it keeps? Semantics and Pragmatics 9(17):1–63. https://doi.org/10.3765/sp.9.17. Yansong Feng and Mirella Lapata. 2010. Visual information in semantic representation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 91–99. Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visualsemantic embedding model. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 2121–2129. Dimitra Gkatzia, Verena Rieser, Phil Bartie, and William Mackaness. 2015. From the virtual to the realworld: Referring to objects in real-world spatial scenes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1936–1942. http://aclweb.org/anthology/D15-1224. Michael Grubinger, Paul Clough, Henning M¨uller, and Thomas Deselaers. 2006. The IAPR TC-12 benchmark: a new evaluation resource for visual information systems. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2006). Genoa, Italy, pages 13–23. Aur´elie Herbelot and Eva Maria Vecchi. 2015. Building a shared world: mapping distributional to modeltheoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 22–32. http://aclweb.org/anthology/D15-1003. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2015. Natural language object retrieval. CoRR abs/1511.04164. http://arxiv.org/abs/1511.04164. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L Berg. 2014. ReferItGame: Referring to Objects in Photographs of Natural Scenes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). Doha, Qatar, pages 787–798. Douwe Kiela and L´eon Bottou. 2014. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 36– 45. http://www.aclweb.org/anthology/D14-1005. Satwik Kottur, Ramakrishna Vedantam, Jos´e MF Moura, and Devi Parikh. 2016. Visual word2vec (vis-w2v): Learning visually grounded word embeddings using abstract scenes. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 4985–4994. Emiel Krahmer and Kees Van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics 38(1):173–218. Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2009. Learning to detect unseen object classes by between-class attribute transfer. In IEEE Computer Vision and Pattern Recognition. IEEE, pages 951–958. Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. 2013. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(3):453–465. 252 Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 1403–1414. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015a. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 270–280. http://www.aclweb.org/anthology/P15-1027. Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015b. Combining language and vision with a multimodal skip-gram model. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 153–163. http://www.aclweb.org/anthology/N151016. Willem JM Levelt, Herbert Schriefers, Dirk Vorberg, Antje S Meyer, Thomas Pechmann, and Jaap Havinga. 1991. The time course of lexical access in speech production: A study of picture naming. Psychological review 98(1):122. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2015. Generation and comprehension of unambiguous object descriptions. ArXiv / CoRR abs/1511.02283. http://arxiv.org/abs/1511.02283. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems. Curran Associates Inc., USA, NIPS’13, pages 3111–3119. http://dl.acm.org/citation.cfm?id=2999792.2999959. Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2013. Zero-shot learning by convex combination of semantic embeddings. International Conference on Learning Representations (ICLR) . Vicente Ordonez, Wei Liu, Jia Deng, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2016. Learning to name objects. Commun. ACM 59(3):108–115. Eleanor Rosch. 1978. Principles of Categorization. In Eleanor Rosch and Barbara B. Lloyd, editors, Cognition and Categorization, Lawrence Erlbaum, Hillsdale, N.J., USA, pages 27—-48. Deb Roy. 2005. Grounding words in perception and action: Computational insights. Trends in Cognitive Sciene 9(8):389–396. Deb Roy, Peter Gorniak, Niloy Mukherjee, and Josh Juster. 2002. A trainable spoken language understanding system for visual object selection. 
In Proceedings of the International Conference on Speech and Language Processing 2002 (ICSLP 2002). Colorado, USA. Deb K. Roy. 2002. Learning visually-grounded words and syntax for a scene description task. Computer Speech and Language 16(3). David Schlangen, Sina Zarriess, and Casey Kennington. 2016. Resolving references to objects in photographs using the words-as-classifiers model. In Proceedings of the 54rd Annual Meeting of the Association for Computational Linguistics (ACL 2016). Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 721– 732. http://www.aclweb.org/anthology/P14-1068. Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Advances in neural information processing systems. pages 935–943. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In CVPR 2015. Boston, MA, USA. Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research 37(1):141–188. Sina Zarrieß and David Schlangen. 2016. Easy things first: Installments improve referring expression generation for objects in photographs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 610–620. http://www.aclweb.org/anthology/P16-1058. Sina Zarrieß and David Schlangen. 2017. Is this a child, a girl or a car? exploring the contribution of distributional similarity to learning referential word meanings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, pages 86–91. http://aclweb.org/anthology/E17-2014. 253 A Vocabulary Splits for Zero-Shot Naming Hypernyms animal, animals, plant, plants, vehicle, person, persons, food, thing, object, area, things, thingy, toy, anyone, clothes, dish, building, land, structure, item, water Singulars/Plurals . . . . . . training on instances of: animals, plants, cars, people, buildings, trees, man, kid, guy, girl, boy, flower, bird, hill, orange, cloud, curtain, window, shrub, apple, light, house, glass, bottle, dude, leg, book, wall, bananas, carrots, pillows, bushes, mountains, bags . . . testing on instances of: animal, plant, car, person, building, tree, men, kids, guys, girls, boys, flowers, birds, hills, oranges, clouds, curtains, windows, shrubs, apples, lights, houses, glasses, bottles, dudes, legs, books, walls, banana, carrot, pillow, bush, mountain, bag 254
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 255–265 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1024 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 255–265 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1024 FOIL it! Find One mismatch between Image and Language caption Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aur´elie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi University of Trento {firstname.lastname}@unitn.it Abstract In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MSCOCO dataset, FOIL-COCO, which associates images with both correct and ‘foil’ captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake (‘foil word’). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image. 1 Introduction Most human language understanding is grounded in perception. There is thus growing interest in combining information from language and vision in the NLP and AI communities. So far, the primary testbeds of Language and Vision (LaVi) models have been ‘Visual Question Answering’ (VQA) (e.g. Antol et al. (2015); Malinowski and Fritz (2014); Malinowski et al. (2015); Gao et al. (2015); Ren et al. (2015)) and ‘Image Captioning’ (IC) (e.g. Hodosh et al. (2013); Fang et al. (2015); Chen and Lawrence Zitnick (2015); Donahue et al. (2015); Karpathy and Fei-Fei (2015); Vinyals et al. (2015)). Whilst some models have seemed extremely successful on those tasks, it remains unclear how the reported results should be interpreted and what those Figure 1: Is the caption correct or foil (T1)? If it is foil, where is the mistake (T2) and which is the word to correct the foil one (T3)? models are actually learning. There is an emerging feeling in the community that the VQA task should be revisited, especially as many current dataset can be handled by ‘blind’ models which use language input only, or by simple concatenation of language and vision features (Agrawal et al., 2016; Jabri et al., 2016; Zhang et al., 2016; Goyal et al., 2016a). In IC too, Hodosh and Hockenmaier (2016) showed that, contrarily to what prior research had suggested, the task is far from been solved, since IC models are not able to distinguish between a correct and incorrect caption. Such results indicate that in current datasets, language provides priors that make LaVi models successful without truly understanding and integrating language and vision. But problems do not stop at biases. Johnson et al. (2016) also point out that current data ‘conflate multiple sources of error, making it hard to pinpoint model weaknesses’, thus highlighting the need for diagnostic datasets. 
Thirdly, existing IC evaluation metrics are sensitive to n-gram overlap and there is a need for measures that better simulate human judgments (Hodosh et al., 2013; Elliott and Keller, 2014; Anderson et al., 2016). Our paper tackles the identified issues by proposing an automatic method for creating a 255 large dataset of real images with minimal language bias and some diagnostic abilities. Our dataset, FOIL (Find One mismatch between Image and Language caption),1 consists of images associated with incorrect captions. The captions are produced by introducing one single error (or ‘foil’) per caption in existing, human-annotated data (Figure 1). This process results in a challenging error-detection/correction setting (because the caption is ‘nearly’ correct). It also provides us with a ground truth (we know where the error is) that can be used to objectively measure the performance of current models. We propose three tasks based on widely accepted evaluation measures: we test the ability of the system to a) compute whether a caption is compatible with the image (T1); b) when it is incompatible, highlight the mismatch in the caption (T2); c) correct the mistake by replacing the foil word (T3). The dataset presented in this paper (Section 3) is built on top of MS-COCO (Lin et al., 2014), and contains 297,268 datapoints and 97,847 images. We will refer to it as FOIL-COCO. We evaluate two state-of-the-art VQA models: the popular one by Antol et al. (2015), and the attention-based model by Lu et al. (2016), and one popular IC model by (Wang et al., 2016). We show that those models perform close to chance level, while humans can perform the tasks accurately (Section 4). Section 5 provides an analysis of our results, allowing us to diagnose three failures of LaVi models. First, their coarse representations of language and visual input do not encode suitably structured information to spot mismatches between an utterance and the corresponding scene (tested by T1). Second, their language representation is not finegrained enough to identify the part of an utterance that causes a mismatch with the image as it is (T2). Third, their visual representation is also too poor to spot and name the visual area that corresponds to a captioning error (T3). 2 Related Work The image captioning (IC) and visual question answering (VQA) tasks are the most relevant to our work. In IC (Fang et al., 2015; Chen and Lawrence Zitnick, 2015; Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Vinyals et al., 2015; 1The dataset is available from https://foilunitn. github.io/ Wang et al., 2016), the goal is to generate a caption for a given image, such that it is both semantically and syntactically correct, and properly describes the content of that image. In VQA (Antol et al., 2015; Malinowski and Fritz, 2014; Malinowski et al., 2015; Gao et al., 2015; Ren et al., 2015), the system attempts to answer open-ended questions related to the content of the image. There is a wealth of literature on both tasks, but we only discuss here the ones most related to our work and refer the reader to the recent surveys by (Bernardi et al., 2016; Wu et al., 2016). Despite their success, it remains unclear whether state-of-the-art LaVi models capture vision and language in a truly integrative fashion. 
We could identify three types of arguments surrounding the high performance of LaVi models: (i) Triviality of the LaVi tasks: Recent work has shown that LaVi models heavily rely on language priors (Ren et al., 2015; Agrawal et al., 2016; Kafle and Kanan, 2016). Even simple correlation and memorisation can result in good performance, without the underlying models truly understanding visual content (Zhou et al., 2015; Jabri et al., 2016; Hodosh and Hockenmaier, 2016). Zhang et al. (2016) first unveiled that there exists a huge bias in the popular VQA dataset by Antol et al. (2015): they showed that almost half of all the questions in this dataset could be answered correctly by using the question alone and ignoring the image completely. In the same vein, Zhou et al. (2015) proposed a simple baseline for the task of VQA. This baseline simply concatenates the Bag of Words (BoW) features from the question and Convolutional Neural Networks (CNN) features from the image to predict the answer. They showed that such a simple method can achieve comparable performance to complex and deep architectures. Jabri et al. (2016) proposed a similar model for the task of multiple choice VQA, and suggested a cross-dataset generalization scheme as an evaluation criterion for VQA systems. We complement this research by introducing three new tasks with different levels of difficulty, on which LaVi models can be evaluated sequentially. (ii) Need for diagnostics: To overcome the bias uncovered in previous datasets, several research groups have started proposing tasks which involve distinguishing distractors from a groundtruth caption for an image. Zhang et al. (2016) introduced a binary VQA task along with a dataset 256 composed of sets of similar artificial images, allowing for more precise diagnostics of a system’s errors. Goyal et al. (2016a) balanced the dataset of Antol et al. (2015), collecting a new set of complementary natural images which are similar to existing items in the original dataset, but result in different answers to a common question. Hodosh and Hockenmaier (2016) also proposed to evaluate a number of state-of-the-art LaVi algorithms in the presence of distractors. Their evaluation was however limited to a small dataset (namely, Flickr30K (Young et al., 2014)) and the caption generation was based on a hand-crafted scheme using only inter-dataset distractors. Most related to our paper is the work by Ding et al. (2016). Like us, they propose to extend the MS-COCO dataset by generating decoys from human-created image captions. They also suggest an evaluation apparently similar to our T1, requiring the LaVi system to detect the true target caption amongst the decoys. Our efforts, however, differ in some substantial ways. First, their technique to create incorrect captions (using BLEU to set an upper similarity threshold) is so that many of those captions will differ from the gold description in more than one respect. For instance, the caption two elephants standing next to each other in a grass field is associated with the decoy a herd of giraffes standing next to each other in a dirt field (errors: herd, giraffe, dirt) or with animals are gathering next to each other in a dirt field (error: dirt; infelicities: animals and gathering, which are both pragmatically odd). Clearly, the more the caption changes in the decoy, the easier the task becomes. In contrast, the foil captions we propose only differ from the gold description by one word and are thus more challenging. 
Secondly, the automatic caption generation of Ding et al means that ‘correct’ descriptions can be produced, resulting in some confusion in human responses to the task. We made sure to prevent such cases, and human performance on our dataset is thus close to 100%. We note as well that our task does not require any complex instructions for the annotation, indicating that it is intuitive to human beings (see §4). Thirdly, their evaluation is a multiple-choice task, where the system has to compare all captions to understand which one is closest to the image. This is arguably a simpler task than the one we propose, where a caption is given and the system is asked to classify it as correct or foil: as we show in §4, detecting a correct caption is much easier than detecting foils. So evaluating precision on both gold and foil items is crucial. Finally, (Johnson et al., 2016) proposed CLEVR, a dataset for the diagnostic evaluation of VQA systems. This dataset was designed with the explicit goal of enabling detailed analysis of different aspects of visual reasoning, by minimising dataset biases and providing rich ground-truth representations for both images and questions. (iii) Lack of objective evaluation metrics: The evaluation of Natural Language Generation (NLG) systems is known to be a hard problem. It is further unclear whether the quality of LaVi models should be measured using metrics designed for language-only tasks. Elliott and Keller (2014) performed a sentence-level correlation analysis of NLG evaluation measures against expert human judgements in the context of IC. Their study revealed that most of those metrics were only weakly correlated with human judgements. In the same line of research, Anderson et al. (2016) showed that the most widely-used metrics for IC fail to capture semantic propositional content, which is an essential component of human caption evaluation. They proposed a semantic evaluation metric called SPICE, that measures how effectively image captions recover objects, attributes and the relations between them. In this paper, we tackle this problem by proposing tasks which can be evaluated based on objective metrics for classification/detection error. 3 Dataset In this section, we describe how we automatically generate FOIL-COCO datapoints, i.e. image, original and foil caption triples. We used the training and validation Microsoft’s Common Objects in Context (MS-COCO) dataset (Lin et al., 2014) (2014 version) as our starting point. In MS-COCO, each image is described by at least five descriptions written by humans via Amazon Mechanical Turk (AMT). The images contains 91 common object categories (e.g. dog, elephant, bird, ... and car, bicycle, airplane, ...), from 11 supercategories (Animal, Vehicle, resp.), with 82 of them having more than 5K labeled instances. In total there are 123,287 images with captions (82,783 for training and 40,504 for validation).2 Our data generation process consists of four 2The MS-COCO test set is not available for download. 257 nr. of datapoints nr. unique images nr. of tot. captions nr. target::foil pairs Train 197,788 65,697 395,576 256 Test 99,480 32,150 198,960 216 Table 1: Composition of FOIL-COCO. main steps, as described below. The last two steps are illustrated in Figure 2. 1. Generation of replacement word pairs We want to replace one noun in the original caption (the target) with an incorrect but similar word (the foil). 
To do this, we take the labels of MSCOCO categories, and we pair together words belonging to the same supercategory (e.g., bicycle::motorcycle, bicycle::car, bird::dog). We use as our vocabulary 73 out of the 91 MS-COCO categories, leaving out those categories that are multiword expressions (e.g. traffic light). We thus obtain 472 target::foil pairs. 2. Splitting of replacement pairs into training and testing To avoid the models learning trivial correlations due to replacement frequency, we randomly split, within each supercategory, the candidate target::foil pairs which are used to generate the captions of the training vs. test sets. We obtain 256 pairs, built out of 72 target and 70 foil words, for the training set, and 216 pairs, containing 73 target and 71 foil words, for the test set. 3. Generation of foil captions We would like to generate foil captions by replacing only target words which refer to visually salient objects. To this end, given an image, we replace only those target words that occur in more than one MS-COCO caption associated with that image. Moreover, we want to use foils which are not visually present, i.e. that refer to visual content not present in the image. Hence, given an image, we only replace a word with foils that are not among the labels (objects) annotated in MS-COCO for that image. We use the images from the MS-COCO training and validation sets to generate our training and test sets, respectively. We obtain 2,229,899 for training and 1,097,012 captions for testing. 4. Mining the hardest foil caption for each image To eliminate possible visual-language dataset bias, out of all foil captions generated in step 3, we select only the hardest one. For this purpose, we need to model the visual-language bias of the dataset. To this end, we use Neuraltalk3 3https://github.com/karpathy/ neuraltalk (Karpathy and Fei-Fei, 2015), one of the stateof-the-art image captioning systems, pre-trained on MS-COCO. Neuraltalk is based on an LSTM which takes as input an image and generates a sentence describing its content. We obtain a neural network N that implicitly represents the visuallanguage bias through its weights. We use N to approximate the conditional probability of a caption C given a dataset T and and an image I (P(C|I, T)). This is obtained by simply using the loss l(C, N(I)) i.e., the error obtained by comparing the pseudo-ground truth C with the sentence predicted by N: P(C|I, T) = 1 −l(C, N(I)) (we refer to (Karpathy and Fei-Fei, 2015) for more details on how l() is computed). P(C|I, T) is used to select the hardest foil among all the possible foil captions, i.e. the one with the highest probability according to the dataset bias learned by N. Through this process, we obtain 197,788 and 99,480 original::foil caption pairs for the training and test sets, respectively. None of the target::foil word pairs are filtered out by this mining process. The final FOIL-COCO dataset consists of 297,268 datapoints (197,788 in training and 99,480 in test set). All the 11 MS-COCO supercategories are represented in our dataset and contain 73 categories from the 91 MS-COCO ones (4.8 categories per supercategory on average.) Further details are reported in Table 1. 4 Experiments and Results We conduct three tasks, as presented below: Task 1 (T1): Correct vs. foil classification Given an image and a caption, the model is asked to mark whether the caption is correct or wrong. 
The aim is to understand whether LaVi models can spot mismatches between their coarse representations of language and visual input. Task 2 (T2): Foil word detection Given an image and a foil caption, the model has to detect the foil word. The aim is to evaluate the understanding of the system at the word level. In order to systematically check the system’s performance with different prior information, we test two different set258 Figure 2: The main aspects of the foil caption generation process. Left column: some of the original COCO captions associated with an image. In bold we highlight one of the target words (bicycle), chosen because it is mentioned by more than one annotator. Middle column: For each original caption and each chosen target word, different foil captions are generated by replacing the target word with all possible candidate foil replacements. Right column: A single caption is selected amongst all foil candidates. We select the ‘hardest’ caption, according to Neuraltalk model, trained using only the original captions. tings: the foil has to be selected amongst (a) only the nouns or (b) all content words in the caption. Task 3 (T3): Foil word correction Given an image, a foil caption and the foil word, the model has to detect the foil and provide its correction. The aim is to check whether the system’s visual representation is fine-grained enough to be able to extract the information necessary to correct the error. For efficiency reasons, we operationalise this task by asking models to select a correction from the set of target words, rather than the whole dataset vocabulary (viz. more than 10K words). 4.1 Models We evaluate both VQA and IC models against our tasks. For the former, we use two of the three models evaluated in (Goyal et al., 2016a) against a balanced VQA dataset. For the latter, we use the multimodal bi-directional LSTM, proposed in (Wang et al., 2016), and adapted for our tasks. LSTM + norm I: We use the best performing VQA model in (Antol et al., 2015) (deeper LSTM + norm I). This model uses a two stack LongShort Term Memory (LSTM) to encode the questions and the last fully connected layer of VGGNet to encode images. Both image embedding and caption embedding are projected into a 1024dimensional feature space. Following (Antol et al., 2015), we have normalised the image feature before projecting it. The combination of these two projected embeddings is performed by a pointwise multiplication. The multi-model representation thus obtained is used for the classification, which is performed by a multi-layer perceptron (MLP) classifier. HieCoAtt: We use the Hierarchical CoAttention model proposed by (Lu et al., 2016) that co-attends to both the image and the question to solve the task. In particular, we evaluate the ‘alternate’ version, i.e. the model that sequentially alternates between generating some attention over the image and question. It does so in a hierarchical way by starting from the word-level, then going to the phrase and then to the entire sentence-level. These levels are combined recursively to produce the distribution over the foil vs. correct captions. IC-Wang: Amongst the IC models, we choose the multimodal bi-directional LSTM (Bi-LSTM) model proposed in (Wang et al., 2016). This model predicts a word in a sentence by considering both the past and future context, as sentences are fed to the LSTM in forward and backward order. 
The model consists of three modules: a CNN for encoding image inputs, a Text-LSTM (T-LSTM) for encoding sentence inputs, a Multimodal LSTM (M-LSTM) for embedding visual and textual vectors to a common semantic space and decoding to sentence. The bidirectional LSTM is implemented with two separate LSTM layers. 259 Baselines: We compare the SoA models above against two baselines. For the classification task, we use a Blind LSTM model followed by a fully connected layer and softmax and train it only on captions as input to predict the answer. In addition, we evaluate the CNN+LSTM model, where visual and textual features are simply concatenated. The models at work on our three tasks For the classification task (T1), the baselines and VQA models can be applied directly. We adapt the generative IC model to perform the classification task as follows. Given a test image I and a test caption, for each word wt in the test caption, we remove the word and use the model to generate new captions in which the wt has been replaced by the word vt predicted by the model (w1,...,wt−1, vt, wt−1,...,wn). We then compare the conditional probability of the test caption with all the captions generated from it by replacing wt with vt. When all the conditional probabilities of the generated captions are lower than the one assigned to the test caption the latter is classified as good, otherwise as foil. For the other tasks, the models have been trained on T1. To perform the foil word detection task (T2), for the VQA models, we apply the occlusion method. Following (Goyal et al., 2016b), we systematically occlude subsets of the language input, forward propagate the masked input through the model, and compute the change in the probability of the answer predicted with the unmasked original input. For the IC model, similarly to T1, we sequentially generate new captions from the foil one by replacing, one by one, the words in it and computing the conditional probability of the foil caption and the one generated from it. The word whose replacement generate the caption with the highest conditional probabilities is taken to be the foil word. Finally, to evaluate the models on the error correction task (T3), we apply the linear regression method over all the target words and select the target word which has the highest probability of making that wrong caption correct with respect to the given image. Upper-bound Using Crowdflower, we collected human answers from 738 native English speakers for 984 image-caption pairs randomly selected from the test set. Subjects were given an image and a caption and had to decide whether it was correct or wrong (T1). If they thought it was wrong, they were required to mark the error in the caption (T2). We collected 2952 judgements (i.e. 3 judgements per pair and 4 judgements per rater) and computed human accuracy in T1 when considering as answer (a) the one provided by at least 2 out of 3 annotators (majority) and (b) the one provided by all 3 annotators (unanimity). The same procedure was adopted for computing accuracies in T2. Accuracies in both T1 an T2 are reported in Table 2. As can be seen, in the majority setting annotators are quasi-perfect in classifying captions (92.89%) and detecting foil words (97.00%). Though lower, accuracies in the unanimity setting are still very high, with raters providing the correct answer in 3 out of 4 cases in both tasks. 
Hence, although we have collected human answers only on a rather small subset of the test set, we believe their results are representative of how easy the tasks are for humans. 4.2 Results As shown in Table 2, the FOIL-COCO dataset is challenging. On T1, for which the chance level is 50.00%, the ‘blind’, language-only model, does badly with an accuracy of 55.62% (25.04% on foil captions), demonstrating that language bias is minimal. By adding visual information, CNN+LSTM, the overall accuracy increases by 5.45% (7.94% on foil captions.) reaching 61.07% (resp. 32.98%). Both SoA VQA and IC models do significantly worse than humans on both T1 and T2. The VQA systems show a strong bias towards correct captions and poor overall performance. They only identify 34.51% (LSTM +norm I) and 36.38% (HieCoAtt) of the incorrect captions (T1). On the other hand, the IC model tends to be biased toward the foil captions, on which it achieves an accuracy of 45.44%, higher than the VQA models. But the overall accuracy (42.21%) is poorer than the one obtained by the two baselines. On the foil word detection task, when considering only nouns as possible foil word, both the IC and the LSTM+norm I models perform close to chance level, and the HieCoAtt performs somewhat better, reaching 38.79%. Similar results are obtained when considering all words in the caption as possible foil. Finally, the VQA models’ accuracy on foil word correction (T3) is extremely low, at 4.7% (LSTM +norm I) and 4.21% (HieCoAtt). The result on T3 makes it clear that the VQA systems are unable to extract from the image rep260 resentation the information needed to correct the foil: despite being told which element in the caption is wrong, they are not able to zoom into the correct part of the image to provide a correction, or if they are, cannot name the object in that region. The IC model performs better compared to the other models, having an accuracy that is 20,78% higher than chance level. T1: Classification task Overall Correct Foil Blind 55.62 86.20 25.04 CNN+LSTM 61.07 89.16 32.98 IC-Wang 42.21 38.98 45.44 LSTM + norm I 63.26 92.02 34.51 HieCoAtt 64.14 91.89 36.38 Human (majority) 92.89 91.24 94.52 Human (unanimity) 76.32 73.73 78.90 T2: Foil word detection task nouns all content words Chance 23.25 15.87 IC-Wang 27.59 23.32 LSTM + norm I 26.32 24.25 HieCoAtt 38.79 33.69 Human (majority) 97.00 Human (unanimity) 73.60 T3: Foil word correction task all target words Chance 1.38 IC-Wang 22.16 LSTM + norm I 4.7 HieCoAtt 4.21 Table 2: T1: Accuracy for the classification task, relatively to all image-caption pairs (overall) and by type of caption (correct vs. foil); T2: Accuracy for the foil word detection task, when the foil is known to be among the nouns only or when it is known to be among all the content words; T3: Accuracy for the foil word correction task when the correct word has to be chosen among any of the target words. 5 Analysis We performed a mixed-effect logistic regression analysis in order to check whether the behavior of the best performing models in T1, namely the VQA models, can be predicted by various linguistic variables. We included: 1) semantic similarity between the original word and the foil (computed as the cosine between the two corresponding word2vec embeddings (Mikolov et al., 2013)); 2) frequency of original word in FOIL-COCO captions; 3) frequency of the foil word in FOILCOCO captions; 4) length of the caption (number of words). 
The mixed-effect model was performed to get rid of possible effects due to either object supercategory (indoor, food, vehicle, etc.) or target::foil pair (e.g., zebra::giraffe, boat::airplane, etc.). For both LSTM + norm I and HieCoAtt, word2vec similarity, frequency of the original word, and frequency of the foil word turned out to be highly reliable predictors of the model’s response. The higher the values of these variables, the more the models tend to provide the wrong output. That is, when the foil word (e.g. cat) is semantically very similar to the original one (e.g. dog), the models tend to wrongly classify the caption as ‘correct’. The same holds for frequency values. In particular, the higher the frequency of both the original word and the foil one, the more the models fail. This indicates that systems find it difficult to distinguish related concepts at the textvision interface, and also that they may tend to be biased towards frequently occurring concepts, ‘seeing them everywhere’ even when they are not present in the image. Caption length turned out to be only a partially reliable predictor in the LSTM + norm I model, whereas it is a reliable predictor in HieCoAtt. In particular, the longer the caption, the harder for the model to spot that there is a foil word that makes the caption wrong. As revealed by the fairly high variance explained by the random effect related to target::foil pairs in the regression analysis, both models perform very well on some target::foil pairs, but fail on some others (see leftmost part of Table 4 for same examples of easy/hard target::foil pairs). Moreover, the variance explained by the random effect related to object supercategory is reported in Table 3. As can be seen, for some supercategories accuracies are significatively higher than for others (compare, e.g., ‘electronic’ and ‘outdoor’). In a separate analysis, we also checked whether there was any correlation between results and the position of the foil in the sentence, to ensure the models did not profit from any undesirable artifacts of the data. We did not find any such correlation. 261 Super-category No. of object No. of foil captions Acc. using LSTM + norm I Acc. using HieCoAtt outdoor 2 107 2.80 0.93 food 9 10407 22.00 26.59 indoor 6 4911 30.74 27.97 appliance 5 2811 32.72 34.54 sports 10 16276 31.57 31.61 animal 10 21982 39.03 43.18 vehicle 8 16514 34.38 40.09 furniture 5 13625 33.27 33.13 accessory 5 3040 49.53 31.80 electronic 6 5615 45.82 43.47 kitchen 7 4192 38.19 45.34 Table 3: Classification Accuracy of foil captions by Super Categories (T1). The No. of the objects and the No. of foil captions refer to the test set. The training set has a similar distribution. 
Top-5 Bottom-5 T1: LSTM + norm I racket::glove 100 motorcycle::airplane 0 racket::kite 97.29 bicycle::airplane 0 couch::toilet 97.11 drier::scissors 0 racket::skis 95.23 bus::airplane 0.35 giraffe::sheep 95.09 zebra::giraffe 0.43 T1: HieCoAtt tie::handbag 100 drier::scissors 0 snowboard::glove 100 fork::glass 0 racket::skis 100 handbag::tie 0 racket::glove 100 motorcycle::airplane 0 backpack::handbag 100 train::airplane 0 Top-5 Bottom-5 T2: LSTM + norm I drier::scissors 100 glove::skis 0 zebra::giraffe 88.98 snowboard::racket 0 boat::airplane 87.87 donut::apple 0 truck::airplane 85.71 glove::surfboard 0 train::airplane 81.93 spoon::bottle 0 T2: HieCoAtt zebra::elephant 94.92 drier::scissors 0 backpack::handbag 94.44 handbag::tie 0 cow::zebra 93.33 broccoli:orange 1.47 bird::sheep 93.11 zebra::giraffe 1.96 orange::carrot 92.37 boat::airplane 2.09 Table 4: Easiest and hardest target::foil pairs: T1 (caption classification) and T2 (foil word detection). To better understand results on T2, we performed an analysis investigating the performance of the VQA models on different target::foil pairs. As reported in Table 4 (right), both models perform nearly perfectly with some pairs and very badly with others. At first glance, it can be noticed that LSTM + norm I is very effective with pairs involving vehicles (airplane, truck, etc.), whereas HieCoAtt seems more effective with pairs involving animate nouns (i.e. animals), though more in depth analysis is needed on this point. More interestingly, some pairs that are found to be predicted almost perfectly by LSTM + I norm, namely boat::airplane, zebra::giraffe, and drier::scissors, turn out to be among the Bottom-5 cases in HieCoAtt. This suggests, on the one hand, that the two VQA models use different strategies to perform the task. On the other hand, it shows that our dataset does not contain cases that are a priori easy for any model. The results of IC-Wang on T3 are much higher than LSTM + norm I and HieCoAtt, although it is outperformed by or is on par with HieCoAtton on T1-T2. Our interpretation is that this behaviour is related to the discriminative/generative nature of our tasks. Specifically, T1 and T2 are discriminative tasks and LSTM + norm I and HieCoAtt are discriminative models. Conversely, T3 is a generative task (a word needs to be generated) and IC-Wang is a generative model. It would be interesting to test other IC models on T3 and compare their results against the ones reported here. However, note that IC-Wang is ‘tailored’ for T3 because it takes as input the whole sentence (minus the word to be generated), while common sequential IC approaches can only generate a word depending on the previous words in the sentence. As far as human performance is concerned, both T1 and T2 turn out to be extremely easy. In T1, image-caption pairs were correctly judged as correct/wrong in overall 914 out of 984 cases (92.89%) in the majority setting. In the unanim262 ity setting, the correct response was provided in 751 out of 984 cases (76.32%). Judging foil captions turns out to be slightly easier than judging correct captions in both settings, probably due to the presence of typos and misspellings that sometimes occur in the original caption (e.g. raters judge as wrong the original caption People playing ball with a drown and white dog, where ‘brown’ was misspelled as ‘drown’). 
To better understand which factors contribute to make the task harder, we qualitatively analyse those cases where all annotators provided a wrong judgement for an image-caption pair. As partly expected, almost all cases where original captions (thus correct for the given image) are judged as being wrong are cases where the original caption is indeed incorrect. For example, a caption using the word ‘motorcycle’ to refer to a bicycle in the image is judged as wrong. More interesting are those cases where all raters agreed in considering as correct image-caption pairs that are instead foil. Here, it seems that vagueness as well as certain metaphorical properties of language are at play: human annotators judged as correct a caption describing Blue and banana large birds on tree with metal pot (see Fig 3, left), where ‘banana’ replaced ‘orange’. Similarly, all raters judged as correct the caption A cat laying on a bed next to an opened keyboard (see Fig 3, right), where the cat is instead laying next to an opened laptop. Focusing on T2, it is interesting to report that among the correctly-classified foil cases, annotators provided the target word in 97% and 73.6% of cases in the majority and unanimity setting, respectively. This further indicates that finding the foil word in the caption is a rather trivial task for humans. Figure 3: Two cases of foil image-caption pairs that are judged as correct by all annotators. 6 Conclusion We have introduced FOIL-COCO, a large dataset of images associated with both correct and foil captions. The error production is automatically generated, but carefully thought out, making the task of spotting foils particularly challenging. By associating the dataset with a series of tasks, we allow for diagnosing various failures of current LaVi systems, from their coarse understanding of the correspondence between text and vision to their grasp of language and image structure. Our hypothesis is that systems which, like humans, deeply integrate the language and vision modalities, should spot foil captions quite easily. The SoA LaVi models we have tested fall through that test, implying that they fail to integrate the two modalities. To complete the analysis of these results, we plan to carry out a further task, namely ask the system to detect in the image the area that produces the mismatch with the foil word (the red box around the bird in Figure 1.) This extra step would allow us to fully diagnose the failure of the tested systems and confirm what is implicit in our results from task 3: that the algorithms are unable to map particular elements of the text to their visual counterparts. We note that the addition of this extra step will move this work closer to the textual/visual explanation research (e.g., (Park et al., 2016; Selvaraju et al., 2016)). We will then have a pipeline able to not only test whether a mistake can be detected, but also whether the system can explain its decision: ‘the wrong word is dog because the cyclists are in fact approaching a bird, there, in the image’. LaVi models are a great success of recent research, and we are impressed by the amount of ideas, data and models produced in this stimulating area. With our work, we would like to push the community to think of ways that models can better merge language and vision modalites, instead of merely using one to supplement the other. 
Acknowledgments We are greatful to the Erasmus Mundus European Master in Language and Communication Technologies (EM LCT) for the scholarship provided to the third author. Moreover, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research. 263 References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: Semantic Propositional Image Caption Evaluation. In In Proceedings of the European Conference on Computer Vision (ECCV). Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). https://github.com/ VT-vision-lab/VQA_LSTM_CNN. Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures. J. Artif. Intell. Res.(JAIR) 55:409–442. Xinlei Chen and C Lawrence Zitnick. 2015. Mind’s eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 2422–2431. Nan Ding, Sebastian Goodman, Fei Sha, and Radu Soricut. 2016. Understanding image and text simultaneously: a dual vision-language machine comprehension task. arXiv preprint arXiv:1612.07833 . Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 2625–2634. Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Short Papers. pages 452–457. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 1473–1482. Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual image question. In Advances in Neural Information Processing Systems. pages 2296–2304. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016a. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. arXiv preprint arXiv:1612.00837 . Yash Goyal, Akrit Mohapatra, Devi Parikh, and Dhruv Batra. 2016b. Towards Transparent AI Systems: Interpreting Visual Question Answering Models . In In Proceedings of ICML Visualization Workshop. Micah Hodosh and Julia Hockenmaier. 2016. Focused evaluation for image description with binary forcedchoice tasks. In Proceedings of the 5th Workshop on Vision and Language (VL’16). Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47:853–899. 
Allan Jabri, Armand Joulin, and Laurens van der Maaten. 2016. Revisiting visual question answering baselines. In Proceedings of the European Conference on Computer Vision (ECCV). pages 727–739. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2016. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. ArXiv:1612.06890. Kushal Kafle and Christopher Kanan. 2016. Visual question answering: Datasets, algorithms, and future challenges. arXiv preprint arXiv:1610.01465 . Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3128–3137. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision. Springer, pages 740–755. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In In Proceedings of NIPS 2016. https://github. com/jiasenlu/HieCoAttenVQA. Mateusz Malinowski and Mario Fritz. 2014. A multiworld approach to question answering about realworld scenes based on uncertain input. In Advances in Neural Information Processing Systems. pages 1682–1690. Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE International Conference on Computer Vision. pages 1–9. 264 Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Trevor Darrell Bernt Schiele, and Marcus Rohrbach. 2016. Attentive explanations: Justifying decisions and pointing to the evidence. ArXiv:1612.04757. Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question answering. In Advances in Neural Information Processing Systems (NIPS 2015). Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. 2016. Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization. ArXiv:1610.02391v2. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3156–3164. Cheng Wang, Haojin Yang, Christian Bartz, and Christoph Meinel. 2016. Image captioning with deep bidirectional LSTMs. In Proceedings of the 2016 ACM on Multimedia Conference. ACM, pages 988–997. Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2016. Visual question answering: A survey of methods and datasets. arXiv preprint arXiv:1607.05910 . Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2:67–78. Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 5014–5022. 
Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167 . 265
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 266–276 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1025 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 266–276 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1025 VERB PHYSICS: Relative Physical Knowledge of Actions and Objects Maxwell Forbes Yejin Choi Paul G. Allen School of Computer Science & Engineering University of Washington {mbforbes,yejin}@cs.washington.edu Abstract Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., “My house is bigger than me.” However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, “Tyler entered his house” implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance. 1 Introduction Reading and reasoning about natural language text often requires trivial knowledge about everyday physical actions and objects. For example, given a sentence “Shanice could fit the trophy into the suitcase,” we can trivially infer that the trophy must be smaller than the suitcase even though it is not stated explicitly. This reasoning requires knowledge about the action “fit”—in particular, typical preconditions that need to be satisfied in order to perform the action. In addition, reasoning Natural language clues approx max width Relative physical knowledge about objects Physical implications of actions “She barged into the stable.” HUMAN STABLE size: smaller weight: lighter speed: faster strength: n/a rigidness: less rigid x barged into y ⇒ x is smaller than y ⇒ x is lighter than y ⇒ x is faster than y ⇒ x is less rigid than y Figure 1: An overview of our approach. A verb’s usage in language (top) implies physical relations between objects it takes as arguments. This allows us to reason about properties of specific objects (middle), as well as the knowledge implied by the verb itself (bottom). about the applicability of various physical actions in a given situation often requires background knowledge about objects in the world, for example, that people are usually smaller than houses, that cars generally move faster than humans walk, or that a brick probably is heavier than a feather. In fact, the potential use of such knowledge about everyday actions and objects can go beyond language understanding and reasoning. Many open challenges in computer vision and robotics may also benefit from such knowledge, as shown 266 in recent work that requires visual reasoning and entailment (Izadinia et al., 2015; Zhu et al., 2014). 
Ideally, an AI system should acquire such knowledge through direct physical interactions with the world. However, such a physically interactive system does not seem feasible in the foreseeable future. In this paper, we present an approach to acquire trivial physical knowledge from unstructured natural language text as an alternative knowledge source. In particular, we focus on acquiring relative physical knowledge of actions and objects organized along five dimensions: size, weight, strength, rigidness, and speed. Figure 1 illustrates example knowledge of (1) relative physical relations of object pairs and (2) physical implications of actions when applied to those object pairs. While natural language text is a rich source to obtain broad knowledge about the world, compiling trivial commonsense knowledge from unstructured text is a nontrivial feat. The central challenge lies in reporting bias: people rarely states the obvious (Gordon and Van Durme, 2013; Sorower et al., 2011; Misra et al., 2016; Zhang et al., 2017), since it goes against Grice’s conversational maxim on the quantity of information (Grice, 1975). In this work, we demonstrate that it is possible to overcome reporting bias and still extract the unspoken knowledge from language. The key insight is this: there is consistency in the way people describe how they interact with the world, which provides vital clues to reverse engineer the common knowledge shared among people. More concretely, we frame knowledge acquisition as joint inference over two closely related puzzles: inferring relative physical knowledge about object pairs while simultaneously reasoning about physical implications of actions. Importantly, four of five dimensions of knowledge in our study—weight, strength, rigidness, and speed—are either not visual or not easily recognizable by image recognition using currently available computer vision techniques. Thus, our work provides unique value to complement recent attempts to acquire commonsense knowledge from web images (Izadinia et al., 2015; Bagherinezhad et al., 2016; Sadeghi et al., 2015). In sum, our contributions are threefold: • We introduce a new task in the domain of commonsense knowledge extraction from language, focusing on the physical implications of actions and the relative physical relations among objects, organized along five dimensions. • We propose a model that can infer relations over grounded object pairs together with first order relations implied by physical verbs. • We develop a new dataset VERBPHYSICS that compiles crowdsourced knowledge of actions and objects.1 The rest of the paper is organized as follows. We first provide the formal definition of knowledge we aim to learn in Section 2. We then describe our data collection in Section 3 and present our inference model in Section 4. Empirical results are given in Section 5 and discussed in Section 6. We review related work in Section 7 and conclude in Section 8. 2 Representation of Relative Physical Knowledge 2.1 Knowledge Dimensions We consider five dimensions of relative physical knowledge in this work: size, weight, strength, rigidness, and speed. “Strength” in our work refers to the physical durability of an object (e.g., “diamond” is stronger than “glass”), while “rigidness” refers to the physical flexibility of an object (e.g., “glass” is more rigid than a “wire”). 
When considered in verb implications, size, weight, strength, and rigidness concern individual-level semantics; the relative properties implied by verbs in these dimensions are true in general. On the other hand, speed concerns stage-level semantics; its implied relations hold only during a window surrounding the verb.2

1 https://uwnlp.github.io/verbphysics/
2 We thank reviewer two for pointing us to this terminology and for the illustrative example: "When a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level)."

2.2 Relative Physical Knowledge
Let us first consider the problem of representing relative physical knowledge between two objects. We can write a single piece of knowledge like "A person is larger than a basketball" as

person >_{size} basketball

Any propositional statement can have exceptions and counterexamples. Moreover, we need to cope with uncertainties involved in knowledge acquisition. Therefore, we assume each piece of knowledge is associated with a probability distribution. More formally, given objects x and y, we define a random variable O^a_{x,y} whose range is {>, <, ≃} with respect to a knowledge dimension a ∈ {SIZE, WEIGHT, STRENGTH, RIGIDNESS, SPEED}, so that

P(O^a_{x,y} = r), r ∈ {>, <, ≃}.

This immediately provides two simple properties:

P(O_{x,y} = >) = P(O_{y,x} = <)
P(O_{x,x} = ≃) = 1

Figure 2: Example physical implications represented as frame relations between a pair of arguments. (The figure shows the pairwise relations I_{agent,theme}, I_{theme,goal}, and I_{agent,goal}, with examples such as "He threw the ball": x threw y ⇒ x is larger, heavier, and slower than y; "We walked into the house": x walked into y ⇒ x is smaller, lighter, and faster than y; and "I squashed the bug with my boot": squashed x with y ⇒ x is smaller, lighter, weaker, less rigid, and slower than y.)

2.3 Physical Implications of Verbs
Next we consider representing relative physical implications of actions applied over two objects. For example, consider an action frame "x threw y." In general, the following implications are likely to be true:

"x threw y" ⇒ x >_{size} y
"x threw y" ⇒ x >_{weight} y
"x threw y" ⇒ x <_{speed} y

Again, in order to cope with exceptions and uncertainties, we assume a probability distribution associated with each implication. More formally, we define a random variable F^a_v to denote the implication of the action verb v when applied over its arguments x and y with respect to a knowledge dimension a, so that

P(F^{size}_{threw} = >) := P("x threw y" ⇒ x >_{size} y)
P(F^{weight}_{threw} = >) := P("x threw y" ⇒ x >_{weight} y)

where the range of F^{size}_{threw} is {>, <, ≃}. Intuitively, F^{size}_{threw} represents the likely first-order relation implied by "throw" over ungrounded (i.e., variable) object pairs.

The above definition assumes that there is only a single implication relation for any given verb with respect to a specific knowledge dimension. This is generally not true, since a verb, especially a common action verb, can often invoke a number of different frames according to frame semantics (Fillmore, 1976). Thus, given a number of different frame relations v_1, ..., v_T associated with a verb v, we define random variables F with respect to a specific frame relation v_t, i.e., F^a_{v_t}. We use this notation going forward.
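To make the representation above concrete, the following is a minimal Python sketch of how the object-pair variables O^a_{x,y} and frame variables F^a_{v_t} could be stored as distributions over {>, <, ≃}, including the two symmetry properties; the class and function names are our own illustration, not code from the authors.

```python
# A minimal sketch (not the authors' code) of the knowledge representation
# in Section 2: distributions over {>, <, ~} for object pairs and frames.

ATTRIBUTES = ["size", "weight", "strength", "rigidness", "speed"]
RELATIONS = [">", "<", "~"]          # "~" stands for the similarity relation

def normalize(dist):
    """Normalize a dict of relation -> weight into a probability distribution."""
    total = sum(dist.values())
    return {r: v / total for r, v in dist.items()}

class ObjectPairKnowledge:
    """Stores P(O^a_{x,y} = r) for object pairs (x, y) and attributes a."""
    def __init__(self):
        self.table = {}  # (x, y, attribute) -> {relation: probability}

    def set(self, x, y, attribute, dist):
        assert set(dist) == set(RELATIONS)
        dist = normalize(dist)
        self.table[(x, y, attribute)] = dist
        # Symmetry property: P(O_{x,y} = >) = P(O_{y,x} = <)
        self.table[(y, x, attribute)] = {">": dist["<"], "<": dist[">"], "~": dist["~"]}

    def get(self, x, y, attribute):
        if x == y:
            return {">": 0.0, "<": 0.0, "~": 1.0}   # P(O_{x,x} = ~) = 1
        return self.table.get((x, y, attribute))

class FrameKnowledge:
    """Stores P(F^a_{v_t} = r): the relation a frame v_t implies between its arguments."""
    def __init__(self):
        self.table = {}  # (frame, attribute) -> {relation: probability}

    def set(self, frame, attribute, dist):
        self.table[(frame, attribute)] = normalize(dist)

# Example usage with the "x threw y" frame from the text.
objects = ObjectPairKnowledge()
objects.set("person", "basketball", "size", {">": 0.9, "<": 0.05, "~": 0.05})

frames = FrameKnowledge()
frames.set("x threw y", "size", {">": 0.8, "<": 0.1, "~": 0.1})
print(objects.get("basketball", "person", "size"))  # symmetry: mostly "<"
```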
Frame Perspective on Verb Implications: Figure 2 illustrates the frame-centric view of physical implication knowledge we aim to learn. Importantly, the key insight of our work is inspired by Fillmore’s original manuscript on frame semantics (Fillmore, 1976). Fillmore has argued that “frames”—the contexts in which utterances are situated—should be considered as a third primitive of describing a language, along with a grammar and lexicon. While existing frame annotations such as FrameNet (Baker et al., 1998), PropBank (Palmer et al., 2005), and VerbNet (Kipper et al., 2000) provide rich frame knowledge associated 268 with a predicate, none of them provide the exact kind of physical implications we consider in our paper. Thus, our work can potentially contribute to these resources by investigating new approaches to automatically recover richer frame knowledge from language. In addition, our work is motivated by the formal semantics of Dowty (1991), as the task of learning verb implications is essentially that of extracting lexical entailments for verbs. 3 Data and Crowdsourced Knowledge Action Verbs: We pick 50 classes of Levin verbs from both “alternation classes” and “verb classes” (Levin, 1993), which corresponds to about 1100 unique verbs. We sort this list by frequency of occurrence in our frame patterns in the Google Syntax Ngrams corpus (Goldberg and Orwant, 2013) and pick the top 100 verbs. Action Frames: Figure 2 illustrates examples of action frame relations. Because we consider implications over pairwise argument relations for each frame, there are sometimes multiple frame relations we consider for a single frame. To enumerate action frame relations for each verb, we use syntactic patterns based on dependency parse by extracting the core components (subject, verb, direct object, prepositional object) of an action, then map the subject to an agent, the direct object to a theme, and the prepositional object to a goal.3 For those frames that involve an argument in a prepositional phrase, we create a separate frame for each preposition based on the statistics observed in the Google Syntax Ngram corpus. Because the syntax ngram corpus provides only tree snippets without context, this way of enumerating potential frame patterns tend to overgenerate. Thus we refine our prepositions for each frame by taking either the intersection or union with the top 5 Google Surface Ngrams (Michel et al., 2011), depending on whether the frame was under- or over-generating. We also add an additional crowdsourcing step where we ask crowd workers to judge whether a frame pattern with a particular verb and preposition could plausibly be found in a sentence. This process results in 813 frame templates, an average of 8.13 per verb. 3Future research could use an SRL parser instead. We use dependency parse to benefit from the Google Syntax Ngram dataset that provides language statistics over an extremely large corpus, which does not exist for SRL. 
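As a rough illustration of the frame enumeration described in the "Action Frames" paragraph above, the sketch below maps the core components of an already-parsed clause to agent/theme/goal roles and generates the pairwise frame relations; the input format and templates are hypothetical simplifications of our own (the actual pipeline works over dependency-parsed Google Syntax Ngrams and the refinement steps described above).

```python
# Hypothetical sketch of enumerating pairwise frame relations for one clause.
# A "clause" here is a simplified dict of already-extracted core components.

def frame_relations(clause):
    """Yield (frame_template, argument_pair) relations for one clause."""
    roles = {}
    if "subject" in clause:
        roles["agent"] = clause["subject"]
    if "dobj" in clause:
        roles["theme"] = clause["dobj"]
    if "pobj" in clause:
        roles["goal"] = clause["pobj"]

    verb = clause["verb"]
    prep = clause.get("prep")          # preposition introducing the goal, if any

    # Pairwise relations between realized roles, e.g. I_{agent,theme}.
    pairs = [("agent", "theme"), ("theme", "goal"), ("agent", "goal")]
    for role_x, role_y in pairs:
        if role_x in roles and role_y in roles:
            if role_y == "goal" and prep:
                template = f"x {verb} {prep} y" if role_x == "agent" else f"{verb} x {prep} y"
            else:
                template = f"x {verb} y"
            yield template, (roles[role_x], roles[role_y])

# "I squashed the bug with my boot" -> agent/theme, theme/goal, agent/goal relations
clause = {"verb": "squashed", "subject": "PERSON", "dobj": "bug",
          "prep": "with", "pobj": "boot"}
for template, args in frame_relations(clause):
    print(template, args)
```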
Data collected      Total   Seed / dev / test
Verbs 5%            100     5 / 45 / 50
Verbs 20%           "       20 / 30 / 50
Frames 5%           813     65 / 333 / 415
Frames 20%          "       188 / 210 / 415
Object pairs 5%     3656    183 / 1645 / 1828
Object pairs 20%    "       733 / 1096 / 1828

Per-attribute frame statistics
            Agreement        Counts (usable)
            2/3     3/3      Verbs   Frames
size        0.91    0.41     96      615
weight      0.90    0.33     97      562
strength    0.88    0.25     95      465
rigidness   0.87    0.26     89      432
speed       0.93    0.36     88      420

Per-attribute object pair statistics
            Agreement        Counts (usable)
            2/3     3/3      Distinct objs   Pairs
size        0.95    0.59     210             2552
weight      0.95    0.56     212             2586
strength    0.92    0.43     208             2335
rigidness   0.91    0.39     212             2355
speed       0.90    0.38     209             2184

Table 1: Statistics of crowdsourced knowledge. Frames are partitioned by verb. Counts are shown for usable data, which includes only ≥ 2/3 agreement and removes all with "no relation." Each prediction task (frames or object pairs) is given 5% of that domain's data as seed. We compare models using either 5% or 20% of the other domain's data as seed.

Object Pairs: To provide a source of ground truth relations between objects, we select the object pairs that occur in the 813 frame templates with positive pointwise mutual information (PMI) across the Google Syntax Ngram corpus. After replacing a small set of "human" nouns with a generic HUMAN object, filtering out nouns labeled as abstract by WordNet (Miller, 1995), and distilling all surface forms to their lemmas (also with WordNet), the result is 3656 object pairs.

3.1 Crowdsourcing Knowledge
We collect human judgements of the frame knowledge implications to use as a small set of seed knowledge (5%), a development set (45%), and a test set (50%). Crowd workers are given a frame template such as "x threw y," and then asked to list a few plausible objects (including people and animals) for the missing slots (e.g., x and y).4 We then ask them to rate the general relationship that the arguments of the frame exhibit with respect to all knowledge dimensions (size, weight, etc.). For each knowledge dimension, or attribute, a, workers select an answer from (1) x >_a y, (2) x <_a y, (3) x ≃_a y, or (4) no general relation.

4 This step is to prime them for thinking about the particular template; we do not use the objects they provided.

We conduct a similar crowdsourcing step for the set of object pairs. We ask crowd workers to compare each of the 3656 object pairs along the five knowledge dimensions we consider, selecting an answer from the same options above as with frames.

We reserve 50% of the data as a test set, and split the remainder up either 5% / 45% or 20% / 30% (seed / development) to investigate the effects of different seed knowledge sizes on the model. Statistics for the dataset are provided in Table 1. About 90% of the frames as well as object pairs had 2/3 agreement between workers. After removing frame/attribute combinations and object pairs that received less than 2/3 agreement, or were selected by at least 2/3 workers to have no relation, we end up with roughly 400–600 usable frames and 2100–2500 usable object pairs per attribute.
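The agreement filtering just described (keeping only items with at least 2/3 worker agreement and discarding those judged to have no relation) can be summarized in a few lines. This is an illustrative sketch with made-up field names, not the authors' processing script.

```python
from collections import Counter

# Illustrative sketch of the agreement filter described in Section 3.1.
# Each annotation item is (item_id, attribute, [three worker labels]),
# where a label is one of ">", "<", "~", or "no_relation".

def filter_usable(annotations, min_agreement=2):
    """Keep items where >= min_agreement of 3 workers agree on a real relation."""
    usable = {}
    for item_id, attribute, labels in annotations:
        label, count = Counter(labels).most_common(1)[0]
        if count >= min_agreement and label != "no_relation":
            usable[(item_id, attribute)] = label
    return usable

annotations = [
    ("person,basketball", "size", [">", ">", ">"]),                     # 3/3 agreement, kept
    ("glass,wire", "rigidness", [">", ">", "~"]),                       # 2/3 agreement, kept
    ("x lived at y", "strength", ["no_relation", "no_relation", "<"]),  # dropped
]
print(filter_usable(annotations))
```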
Connecting across substrates are factors that model inter-dependencies across different knowledge dimensions. In what follows, we describe each graph component. 4.1 Nodes The factor graph contains two types of nodes in order to capture two classes of knowledge. The first type of nodes are object pair nodes. Each object pair node is a random variable Oa x,y which captures the relative strength of an attribute a between objects x and y. The second type of nodes are frame nodes. Each frame node is a random variable F a vt. This corresponds to the verb v used in a particular type of frame t, and captures the implied knowledge the frame vt holds along an attribute a. All random variables take on the values {>, <, ≃}. For an object pair node Oa x,y, the value represents the belief about the relation between x and y along the attribute a. For a frame node F a vt, the value represents the belief about the relation along the attribute a between any two objects that might be used in the frame vt. We denote the sets of all object pair and frame random variables O and F, respectively. 4.2 Action–Object Compatibility The key aspect of our work is to reason about two types of knowledge simultaneously: relative knowledge of grounded object pairs, and implications of actions related to those objects. Thus we connect the verb subgraphs and object subgraphs through selectional preference factors ψs between two such nodes Oa x,y and F a vt if we find evidence from text that suggests objects x and y are used in the frame vt. These factors encourage both random variables to agree on the same value. As an example, consider a node Osize p,b which represents the relative size of a person and a basketball, and a node F size threwdobj which represents the relative size implied by an “x threw y” frame. If we find significant evidence in text that “[person] threw [basketball]” occurs, we would add a selectional preference factor to connect Osize p,b with F size threwdobj and encourage them towards the same value. This means that if it is discovered that people are larger than basketballs (the value >), then we would expect the frame “x threw y” to entail x >size y (also the value >). 4.3 Semantic Similarities Some frames have relatively sparse text evidences to support their corresponding knowledge acquisition. Thus, we include several types of factors based on semantic similarities as described below. Cross-Verb Frame Similarity: We add a group of factors ψv between two verbs v and u (to connect a specific frame of v with a corresponding frame of u) based on the verb-level similarities. Within-Verb Frame Similarity: Within each verb v, which consists of a set of frame relations v1, ...vT , we also include frame-level similarity factors ψf between vi and vj. This gives us more evidence over a broader range of frames when textual evidence might be sparse. 270 vsize squish vsize throw vsize walk vweight throw vweight walk … … … F size throw1 F size throw2 F size throw3 F size throw4 f a o v s f v v a s s v a a frames for vsize throw o o s hardness random variable (RV) group of RVs factor connects RV factors connect subset of RVs verb similarity frame similarity object similarity attribute similarity selectional preference f a o v s attribute subgraphs subgraphs object verb subgraphs size w eight Osize s,t Osize p,q Osize q,r Osize p,s strength vstrength squish Ostrength p,t Ostrength p,q Figure 3: High level view of the factor graph model. 
Performance on both learning relative knowledge about objects (right), as well as entailed knowledge from verbs (center) via realized frames (left), is improved by modeling their interplay (orange). Unary seed (ψseed) and embedding (ψemb) factors are omitted for clarity. Object Similarity: As with verbs, we add factors ψo that encourage similar pairs of objects to take the same value. Given that each node represents a pair of objects, finding that x and y are similar yields two main cases in how to add factors (aside from the trivial case where the variable Oa x,y is given a unary factor to encourage the value ≃). 1. If nodes Ox,z and Oy,z exist, we expect objects x and y to both have a similar relation to z. We add a factor that encourages Ox,z and Oy,z to take the same value. The same is true if nodes Oz,x and Oz,y exist. 2. On the other hand, if nodes Ox,z and Oz,y exist, we expect these two nodes to reach the opposite decision. In this case, we add a factor that encourages one node to take the value > if the other prefers the value <, and vice versa. (For the case of ≃, if one prefers that value, then both should.) 4.4 Cross-Knowledge Correlation Some knowledge dimensions, such as size and weight, have a significant correlation in their implied relations. For two such attributes a and b, if the same frame F a vi and F b vi exists in both graphs, we add a factor ψa between them to push them towards taking the same value. 4.5 Seed Knowledge In order to kick off learning, we provide a small set of seed knowledge among the random variables in {O, F} with seed factors ψseed. These unary seed factors push the belief for its associated random variable strongly towards the seed label. 4.6 Potential Functions Unary Factors: For all frame and object pair random variables in the training set, we train a maximum entropy classifier to predict the value of the variable. We then use the probabilities of the classifier as potentials for seed factors given to all random variables in their class (frame or object pair). Each log-linear classifier is trained separately per attribute on a featurized vector of the variable: P(r|Xa) ∝ewa·f(Xa) The feature function is defined differently according to the node type: f(Oa p,q) := ⟨g(p), g(q)⟩ f(F a vt) := ⟨h(t), g(v), g(t)⟩ 271 Development Test Algorithm size weight stren rigid speed overall size weight stren rigid speed overall RANDOM 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 MAJORITY 0.38 0.41 0.42 0.18 0.83 0.43 0.35 0.35 0.43 0.20 0.88 0.44 EMB-MAXENT 0.62 0.64 0.60 0.83 0.83 0.69 0.55 0.55 0.59 0.79 0.88 0.66 OUR MODEL (A) 0.71 0.63 0.61 0.82 0.83 0.71 0.55 0.55 0.55 0.79 0.89 0.65 OUR MODEL (B) 0.75 0.68 0.68 0.82 0.78 0.74 0.74 0.71 0.65 0.80 0.87 0.75 Development Test Algorithm size weight stren rigid speed overall size weight stren rigid speed overall RANDOM 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 0.33 MAJORITY 0.50 0.54 0.51 0.50 0.53 0.51 0.51 0.55 0.52 0.49 0.50 0.51 EMB-MAXENT 0.68 0.66 0.64 0.67 0.65 0.66 0.71 0.67 0.64 0.65 0.63 0.66 OUR MODEL (A) 0.74 0.69 0.67 0.68 0.66 0.69 0.68 0.70 0.66 0.66 0.60 0.66 OUR MODEL (B) 0.75 0.74 0.71 0.68 0.66 0.71 0.75 0.76 0.72 0.65 0.61 0.70 Table 2: Accuracy of baselines and our model on both tasks. Top: frame prediction task; bottom: object pair prediction task. In both tasks 5% of in-domain data (frames or object pairs, respectively) are available as seed data. 
We compare providing the other type of data (object pairs or frames, respectively) as seed knowledge, trying 5% (OUR MODEL (A)) and 20% (OUR MODEL (B)).

Here g(x) is the GloVe word embedding (Pennington et al., 2014) for the word x (t is the frame relation's preposition, and g(t) is simply set to the zero vector if there is no preposition) and h(t) is a one-hot vector of the frame relation type. We use GloVe vectors of 100 dimensions for verbs and 50 dimensions for objects and prepositions (the dimensions were picked based on the development set).

Binary Factors: In the case of all other factors, we use a "soft 1" agreement matrix with a strong signal down the diagonal:

        >      ≃      <
  >    0.7    0.1    0.2
  ≃    0.15   0.7    0.15
  <    0.2    0.1    0.7

4.7 Inference
After our full graph is constructed, we use belief propagation to infer the assignments of frames and object pairs not in our training data. Each message μ is a vector where each element is the probability that a random variable takes on each value x ∈ {>, <, ≃}. A message passed from a random variable v to a neighboring factor f about the value x is the product of the messages from its other neighboring factors about x:

μ_{v→f}(x) ∝ ∏_{f′ ∈ N(v) \ {f}} μ_{f′→v}(x)

A message passed from a factor f with potential ψ to a random variable v about its value x is a marginalized belief about v taking value x from the other neighboring random variables combined with its potential:

μ_{f→v}(x) ∝ Σ_{X : X[v] = x} ψ(X) ∏_{v′ ∈ N(f) \ {v}} μ_{v′→f}(X[v′])

where the sum ranges over joint assignments X to the factor's neighboring variables that assign the value x to v. After stopping belief propagation, the marginals for a node can be computed and used as a decision for that random variable. The marginal for v taking value x is the product of its surrounding factors' messages:

μ_v(x) ∝ ∏_{f ∈ N(v)} μ_{f→v}(x)

5 Experimental Results
Factor Graph Construction: We first need to pick a set of frames and objects to determine our set of random variables. The frames are simply the subset of the frames that were crowdsourced in the given configuration (e.g., seed + dev), with "soft 1" unary seed factors (the gold-label-indexed row of the binary factor matrix) given only to those in the seed set. The same selection criteria and seed factors are applied to the crowdsourced object pairs. For lexical similarity factors (ψ_v, ψ_o), we pick connections based on the cosine similarity scores of GloVe vectors thresholded above a value chosen based on development set performance. Attribute similarity factors (ψ_a) are chosen based on sets of frames that reach largely the same decisions on the seed data (95%). Frame similarity factors (ψ_f) are added to pairs of frames with linguistically similar constructions.

Figure 4: Example model predictions on dev set frames. The model's confidence is shown by the bars on the right. The correct relation is highlighted in orange (6–10 are failure cases for the model). If there are two blanks, the relation is between them. If there is only one blank, the relation is between PERSON and the blank. Note that ≃ receives minuscule weight because it is never the correct value for frames in the seed set. (The example frames are: (1) ___ opened ___ [size]; (2) PERSON set ___ upon ___ [weight]; (3) ___ stood on ___ [strength]; (4) PERSON arrived on ___ [rigidness]; (5) ___ put up ___ [speed]; (6) PERSON drove ___ for ___ [size]; (7) PERSON stopped ___ with ___ [weight]; (8) ___ lived at ___ [strength]; (9) ___ snipped off ___ [rigidness]; (10) ___ caught ___ [speed].)

Finally, selectional preference
factors (ψs) are picked by using a threshold (also tuned on the development set) of pointwise mutual information (PMI) between the frames and the object pairs’ occurrences in the Google Syntax Ngram corpus. For each task, we consider the set of factors to include in each model a hyperparameter, which is also tuned on the development set. Baselines: Baselines include making a RANDOM choice, picking between >, <, and ≃), picking the MAJORITY label, and a maximum entropy classifier based on the embedding representations (EMB-MAXENT) defined in Section 4.6. Inferring Knowledge of Actions: Our first experiment is to predict knowledge implied by new frames. In this task, 5% of the frames are available as seed knowledge. We experiment with two different sets of seed knowledge for the object pair data: OUR MODEL (A) uses only 5% of the object pair data as seed, and OUR MODEL (B) uses 20%. The full results for the baseline methods and our model are given in the upper half of Table 2. Our model outperforms the baselines on all attributes except for the speed, which has a highly skewed label distribution to allow the majority baseline to Ablated (or added) component Accuracy – Verb similarity 0.69 + Frame similarity 0.62 – Action-object compatibility 0.62 – Object similarity 0.70 + Attribute similarity 0.62 – Frame embeddings 0.63 – Frame seeds 0.62 – Object embeddings 0.62 – Object seeds 0.62 OUR MODEL (A) 0.71 Table 3: Ablation results on size attribute for the frame prediction task on the development dataset for OUR MODEL (A) (5% of the object pairs as seed data). We find that different graph configurations improve performance for different tasks and data amounts. In this setting, frame and attribute similarity factors hindered performance. perform well. Ablations are given in Table 3, and sample correct predictions from the development set are shown in examples 1–5 of Figure 4. Inferring Knowledge of Objects: Our second experiment is to predict the correct relations of new object pairs. The data for this task is the inverse of before: 5% of the object pairs are available as seed knowledge, and we experiment with both 5% (OUR MODEL (A)) and 20% (OUR MODEL (B)) frames given as seed data. Again, both are independently tuned on the development data. Results for this task are presented in the lower half of Table 2. While OUR MODEL (A) is competitive with the strongest baseline, introducing the additional frame data allows OUR MODEL (B) to reach the highest accuracy. 6 Discussion Metaphorical Language: While our frame patterns are intended to capture action verbs, our templates also match senses of those verbs that can be used with abstract or metaphorical arguments, rather than directly physical ones. One example from the development set is “x contained y.” While x and y can be real objects, more abstract senses of “contained” could involve y as a “forest fire” or even a “revolution.” In these instances, x >size y is plausible as an abstract notion: if some entity can contain a revolution, we might think that entity as “larger” or “stronger” than the revolution. Error analysis: Examples 6–10 in Figure 4 highlight failure cases for the model. Example 273 6 shows a case where the comparison is nonsensical because “for” would naturally be followed by a purpose (“He drove the car for work.”) or a duration (“She drove the car for hours.”) rather than a concrete object whose size is measurable. Example 7 highlights an underspecified frame. 
One crowd worker provided the example, “PERSON stopped the fly with {the jar / a swatter},” where fly <weight {jar, swatter}. However, two crowd workers provided examples like “PERSON stopped their car with the brake,” where clearly car >weight brake. This example illustrates complex underlying physics we do not model: a brake—the pedal itself—is used to stop a car, but it does so by applying significant force through a separate system. The next two examples are cases where the model nearly predicts correctly (8, e.g., “She lived at the office.”) and is just clearly wrong (9, e.g., “He snipped off a locket of hair”). Example 10 demonstrates a case of polysemy where the model picks the wrong side. In the phrase, “She caught the runner in first,”, it is correct that she >speed runner. However, the sense chosen by the crowd workers is that of, “She caught the baseball,” where indeed she <speed baseball. 7 Related work Several works straddle the gap between IE, knowledge base completion, and learning commonsense knowledge from text. Earlier works in these areas use large amounts of text to try to extract general statements like “A THING CAN BE READABLE” (Gordon et al., 2010) and frequencies of events (Gordon and Schubert, 2012). Our work focuses on specific domains of knowledge rather than general statements or occurrence statistics, and develops a frame-centric approach to circumvent reporting bias. Other work uses a knowledge base and scores unseen tuples based on similarity to existing ones (Angeli and Manning, 2013; Li et al., 2016). Relatedly, previous work uses natural language inference to infer new facts from a dataset of commonsense facts that can be extracted from unstructured text (Angeli and Manning, 2014). In contrast, we focus on a small number of specific types of knowledge without access to an existing database of knowledge. A number of recent works combine multimodal input to learn visual attributes (Bruni et al., 2012; Silberer et al., 2013), extract commonsense knowledge from web images (Tandon et al., 2016), and overcome reporting bias (Misra et al., 2016). In contrast, we focus on natural language evidence to reason about attributes that are both in (size) and out (weight, rigidness, etc.) of the scope of computer vision. Yet other works mine numerical attributes of objects (Narisawa et al., 2013; Takamura and Tsujii, 2015; Davidov and Rappoport, 2010) and comparative knowledge from the web (Tandon et al., 2014). Our work uniquely learns verb-centric lexical entailment knowledge. A handful of works have attempted to learn the types of knowledge we address in this work. One recent work tried to directly predict several binary attributes (such “is large” and “is yellow”) from on off-the-shelf word embeddings, noting that accuracy was very low (Rubinstein et al., 2015). Another line of work addressed grounding verbs in the context of robotic tasks. One paper in this line acquires verb meanings by observing state changes in the environment (She and Chai, 2016). Another work in this line does a deep investigation of eleven verbs, modeling their physical effect via annotated images along eighteen attributes (Gao et al., 2016). These works are encouraging investigations into multimodal groundings of a small set of verbs. Our work instead grounds into a fixed set of attributes but leverages language on a broader scale to learn about more verbs in more diverse set of frames. 
8 Conclusion We presented a novel take on verb-centric frame semantics to learn implied physical knowledge latent in verbs. Empirical results confirm that by modeling changes in physical attributes entailed by verbs together with objects that exhibit these properties, we are able to better infer new knowledge in both domains. Acknowledgements This research is supported in part by the National Science Foundation Graduate Research Fellowship, DARPA CwC program through ARO (W911NF-15-1-0543), the NSF grant (IIS1524371), and gifts by Google and Facebook. The authors thank the anonymous reviewers for their thorough and insightful comments. 274 References Gabor Angeli and Christopher D Manning. 2013. Philosophers are mortal: Inferring the truth of unseen facts. In CoNLL. pages 133–142. Gabor Angeli and Christopher D Manning. 2014. Naturalli: Natural logic inference for common sense reasoning. In EMNLP. pages 534–545. Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, and Ali Farhadi. 2016. Are elephants bigger than butterflies? reasoning about sizes of objects. arXiv preprint arXiv:1602.00753 . Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational LinguisticsVolume 1. Association for Computational Linguistics, pages 86–90. Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, pages 136–145. Dmitry Davidov and Ari Rappoport. 2010. Extraction and approximation of numerical attributes from the web. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1308–1317. David Dowty. 1991. Thematic proto-roles and argument selection. language pages 547–619. Charles J Fillmore. 1976. Frame semantics and the nature of language. Annals of the New York Academy of Sciences 280(1):20–32. Qiaozi Gao, Malcolm Doering, Shaohua Yang, and Joyce Y Chai. 2016. Physical causality of action verbs in grounded language understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). volume 1, pages 1814–1824. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Second Joint Conference on Lexical and Computational Semantics (* SEM). volume 1, pages 241–247. Jonathan Gordon and Lenhart K Schubert. 2012. Using textual patterns to learn expected event frequencies. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Association for Computational Linguistics, pages 122–127. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Automated knowledge base construction. ACM, pages 25–30. Jonathan Gordon, Benjamin Van Durme, and Lenhart K Schubert. 2010. Learning from the web: Extracting general world knowledge from noisy text. In Collaboratively-Built Knowledge Sources and AI. HP Grice. 1975. Logic and conversation. In P. Cole and J. Morgan, editors, Syntax and Semantics. Academic Press, New York, volume 3: Speech Acts. 
Hamid Izadinia, Fereshteh Sadeghi, Santosh K Divvala, Hannaneh Hajishirzi, Yejin Choi, and Ali Farhadi. 2015. Segment-phrase table for semantic segmentation, visual entailment and paraphrasing. In Proceedings of the IEEE International Conference on Computer Vision. pages 10–18. Karin Kipper, Hoa Trang Dang, Martha Palmer, et al. 2000. Class-based construction of a verb lexicon. AAAI/IAAI 691:696. Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago press. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany, August. Association for Computational Linguistics. Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative analysis of culture using millions of digitized books. science 331(6014):176–182. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy humancentric labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 2930–2939. Katsuma Narisawa, Yotaro Watanabe, Junta Mizuno, Naoaki Okazaki, and Kentaro Inui. 2013. Is a 204 cm man tall or small? acquisition of numerical common sense from the web. In ACL (1). pages 382– 391. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics 31(1):71– 106. 275 Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Dana Rubinstein, EffiLevi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In ACL (2). pages 726–730. Fereshteh Sadeghi, Santosh K Kumar Divvala, and Ali Farhadi. 2015. Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 1456–1464. Lanbo She and Joyce Y Chai. 2016. Incremental acquisition of verb hypothesis space towards physical world interaction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. In ACL (1). pages 572–582. Mohammad S Sorower, Janardhan R Doppa, Walker Orr, Prasad Tadepalli, Thomas G Dietterich, and Xiaoli Z Fern. 2011. Inverting grice’s maxims to learn rules from natural language extractions. In Advances in neural information processing systems. pages 1053–1061. Hiroya Takamura and Jun’ichi Tsujii. 2015. Estimating numerical attributes by bringing together fragmentary clues. In HLT-NAACL. pages 1305–1310. Niket Tandon, Gerard De Melo, and Gerhard Weikum. 2014. Acquiring comparative commonsense knowledge from the web. In AAAI. pages 166–172. Niket Tandon, Charles Hariman, Jacopo Urbani, Anna Rohrbach, Marcus Rohrbach, and Gerhard Weikum. 2016. Commonsense in parts: Mining part-whole relations from the web and image tags. 
In AAAI. pages 243–250. Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Yuke Zhu, Alireza Fathi, and Li Fei-Fei. 2014. Reasoning about object affordances in a knowledge base representation. In European conference on computer vision. Springer, pages 408–424. 276
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 277–287, Vancouver, Canada, July 30 - August 4, 2017. © 2017 Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1026

A* CCG Parsing with a Supertag and Dependency Factored Model
Masashi Yoshikawa and Hiroshi Noji and Yuji Matsumoto
Graduate School of Information and Science
Nara Institute of Science and Technology
8916-5, Takayama, Ikoma, Nara, 630-0192, Japan
{ masashi.yoshikawa.yh8, noji, matsu }@is.naist.jp

Abstract
We propose a new A* CCG parsing model in which the probability of a tree is decomposed into factors of CCG categories and its syntactic dependencies both defined on bi-directional LSTMs. Our factored model allows the precomputation of all probabilities and runs very efficiently, while modeling sentence structures explicitly via dependencies. Our model achieves the state-of-the-art results on English and Japanese CCG parsing.1

1 Our software and the pretrained models are available at: https://github.com/masashi-y/depccg.

1 Introduction
Supertagging in lexicalized grammar parsing is known as almost parsing (Bangalore and Joshi, 1999), in that each supertag is syntactically informative and most ambiguities are resolved once a correct supertag is assigned to every word. Recently this property is effectively exploited in A* Combinatory Categorial Grammar (CCG; Steedman (2000)) parsing (Lewis and Steedman, 2014; Lewis et al., 2016), in which the probability of a CCG tree y on a sentence x of length N is the product of the probabilities of supertags (categories) c_i (locally factored model):

P(y|x) = ∏_{i ∈ [1,N]} P_tag(c_i|x).    (1)

By not modeling every combinatory rule in a derivation, this formulation enables us to employ efficient A* search (see Section 2), which finds the most probable supertag sequence that can build a well-formed CCG tree.

Although much ambiguity is resolved with this supertagging, some ambiguity still remains. Figure 1 shows an example, where the two CCG parses are derived from the same supertags.

Figure 1: CCG trees that are equally likely under Eq. 1. Our model resolves this ambiguity by modeling the head of every word (dependencies). (Both (a) and (b) are parses of the phrase "a house in Paris in France" built from the same supertags; they differ in which noun phrase the second prepositional phrase attaches to.)

Lewis et al.'s approach to this problem is to resort to a deterministic rule. For example, Lewis et al. (2016) employ the attach low heuristics, which is motivated by the right-branching tendency of English, and always prioritizes (b) for this type of ambiguity. Though for English it empirically works well, an obvious limitation is that it does not always derive the correct parse; consider a phrase "a house in Paris with a garden", for which the correct parse has the structure corresponding to (a) instead.

In this paper, we provide a way to resolve these remaining ambiguities under the locally factored model, by explicitly modeling bilexical dependencies as shown in Figure 1. Our joint model is still locally factored so that an efficient A* search can be applied. The key idea is to predict the head of every word independently as in Eq. 1
with a strong unigram model, for which we utilize the scoring model in the recent successful graph-based dependency parsing on LSTMs (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016). Specifically, we extend the bi-directional LSTM (bi-LSTM) architecture of Lewis et al. (2016), which predicts the supertag of a word, so that it predicts the head of the word at the same time with a bilinear transformation.

The importance of modeling structures beyond supertags is demonstrated in the performance gain in Lee et al. (2016), which adds a recursive component to the model of Eq. 1. Unfortunately, this formulation loses the efficiency of the original one since it needs to compute a recursive neural network every time it searches for a new node. Our model does not resort to recursive networks while modeling tree structures via dependencies.

We also extend the tri-training method of Lewis et al. (2016) to learn our model with dependencies from unlabeled data. On English CCGbank test data, our model with this technique achieves 88.8% and 94.0% in terms of labeled and unlabeled F1, which mark the best scores so far.

Besides English, we provide experiments on Japanese CCG parsing. Japanese employs a freer word order dominated by case markers, and a deterministic rule such as the attach low method may not work well. We show that this is actually the case; our method outperforms the simple application of Lewis et al. (2016) by a large margin, 10.0 points in terms of clause dependency accuracy.

2 Background
Our work is built on A* CCG parsing (Section 2.1), which we extend in Section 3 with a head prediction model on bi-LSTMs (Section 2.2).

2.1 Supertag-factored A* CCG Parsing
CCG has a nice property that since every category is highly informative about attachment decisions, assigning it to every word (supertagging) resolves most of its syntactic structure. Lewis and Steedman (2014) utilize this characteristic of the grammar. Let a CCG tree y be a list of categories ⟨c_1, . . . , c_N⟩ and a derivation on it. Their model looks for the most probable y given a sentence x of length N from the set Y(x) of possible CCG trees under the model of Eq. 1:

ŷ = arg max_{y ∈ Y(x)} Σ_{i ∈ [1,N]} log P_tag(c_i|x).

Since this score is factored into each supertag, they call the model a supertag-factored model.

Exact inference of this problem is possible by A* parsing (Klein and Manning, 2003), which uses the following two scores on a chart:

b(C_{i,j}) = Σ_{c_k ∈ c_{i,j}} log P_tag(c_k|x),
a(C_{i,j}) = Σ_{k ∈ [1,N]\[i,j]} max_{c_k} log P_tag(c_k|x),

where C_{i,j} is a chart item called an edge, which abstracts parses spanning interval [i, j] rooted by category C. The chart maps each edge to the derivation with the highest score, i.e., the Viterbi parse for C_{i,j}. c_{i,j} is the sequence of categories on that Viterbi parse, and thus b is called the Viterbi inside score, while a is the approximation (upper bound) of the Viterbi outside score.

A* parsing is a kind of CKY chart parsing augmented with an agenda, a priority queue that keeps the edges to be explored. At every step it pops the edge e with the highest priority b(e) + a(e) and inserts that into the chart, and enqueues any edges that can be built by combining e with other edges in the chart. The algorithm terminates when an edge C_{1,N} is popped from the agenda. A* search for this model is quite efficient because both b and a can be obtained from the unigram category distribution on every word, which can be precomputed before search.
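The following sketch illustrates how precomputed supertag log-probabilities give both the outside heuristic a and the initial agenda priorities b + a in the supertag-factored model. It is a simplified toy (no actual CCG combination or chart update is performed), with data structures of our own choosing rather than the authors' depccg implementation; the example categories and probabilities are made up.

```python
import heapq
import math

# Toy sketch of the supertag-factored A* scores (Section 2.1).
# ptag[i] is a dict category -> probability for word i, assumed precomputed.

def outside_heuristic(ptag):
    """best[i] = max_c log P_tag(c_i = c | x); a(C_{i,j}) = sum of best over words outside [i, j]."""
    best = [max(math.log(p) for p in dist.values()) for dist in ptag]
    total = sum(best)
    prefix = [0.0]
    for b in best:
        prefix.append(prefix[-1] + b)

    def a(i, j):  # words outside the (inclusive) span [i, j]
        return total - (prefix[j + 1] - prefix[i])
    return a

def init_agenda(ptag, a):
    """Initial agenda: one edge per word and category, priority b + a (heapq is a min-heap)."""
    agenda = []
    for i, dist in enumerate(ptag):
        for cat, p in dist.items():
            b = math.log(p)
            heapq.heappush(agenda, (-(b + a(i, i)), b, i, i, cat))
    return agenda

ptag = [
    {"NP": 0.9, "N": 0.1},                 # "John"
    {"(S\\NP)/NP": 0.8, "S\\NP": 0.2},     # "met"
    {"NP": 0.95, "N": 0.05},               # "Mary"
]
a = outside_heuristic(ptag)
agenda = init_agenda(ptag, a)
priority, b, i, j, cat = heapq.heappop(agenda)
print(f"first popped edge: {cat} over [{i},{j}] with b + a = {-priority:.3f}")
```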
The heuristics a gives an upper bound on the true Viterbi outside score (i.e., admissible). Along with this the condition that the inside score never increases by expansion (monotonicity) guarantees that the first found derivation on C1,N is always optimal. a(Ci,j) matches the true outside score if the onebest category assignments on the outside words (arg maxck log Ptag(ck|x)) can comprise a wellformed tree with Ci,j, which is generally not true. Scoring model For modeling Ptag, Lewis and Steedman (2014) use a log-linear model with features from a fixed window context. Lewis et al. (2016) extend this with bi-LSTMs, which encode the complete sentence and capture the long range syntactic information. We base our model on this bi-LSTM architecture, and extend it to modeling a head word at the same time. Attachment ambiguity In A* search, an edge with the highest priority b + a is searched first, but as shown in Figure 1 the same categories (with the same priority) may sometimes derive more than 278 one tree. In Lewis and Steedman (2014), they prioritize the parse with longer dependencies, which they judge with a conversion rule from a CCG tree to a dependency tree (Section 4). Lewis et al. (2016) employ another heuristics prioritizing low attachments of constituencies, but inevitably these heuristics cannot be flawless in any situations. We provide a simple solution to this problem by explicitly modeling bilexical dependencies. 2.2 Bi-LSTM Dependency Parsing For modeling dependencies, we borrow the idea from the recent graph-based neural dependency parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016) in which each dependency arc is scored directly on the outputs of bi-LSTMs. Though the model is first-order, bi-LSTMs enable conditioning on the entire sentence and lead to the state-of-the-art performance. Note that this mechanism is similar to modeling of the supertag distribution discussed above, in that for each word the distribution of the head choice is unigram and can be precomputed. As we will see this keeps our joint model still locally factored and A* search tractable. For score calculation, we use an extended bilinear transformation by Dozat and Manning (2016) that models the prior headness of each token as well, which they call biaffine. 3 Proposed Method 3.1 A* parsing with Supertag and Dependency Factored Model We define a CCG tree y for a sentence x = ⟨xi, . . . , xN⟩as a triplet of a list of CCG categories c = ⟨c1, . . . , cN⟩, dependencies h = ⟨h1, . . . , hN⟩, and the derivation, where hi is the head index of xi. Our model is defined as follows: P(y|x) = ∏ i∈[1,N] Ptag(ci|x) ∏ i∈[1,N] Pdep(hi|x). (2) The added term Pdep is a unigram distribution of the head choice. A* search is still tractable under this model. The search problem is changed as: ˆy = arg max y∈Y (x) ( ∑ i∈[1,N] log Ptag(ci|x) + ∑ i∈[1,N] log Pdep(hi|x) ) , John met NP S\NP/NPNP Mary b(e 2) b(e 1) b(e 3) = b(e 1) + b(e 2) + logPdep(met → John) NP S\NP/NP NP John saw Mary NP S\NP S Figure 2: Viterbi inside score for edge e3 under our model is the sum of those of e1 and e2 and the score of dependency arc going from the head of e2 to that of e1 (the head direction changes according to the child categories). and the inside score is given by: b(Ci,j) = ∑ ck∈ci,j log Ptag(ck|x) (3) + ∑ k∈[i,j]\{root(hC i,j)} log Pdep(hk|x), where hC i,j is a dependency subtree for the Viterbi parse on Ci,j and root(h) returns the root index. 
We exclude the head score for the subtree root token since it cannot be resolved inside [i, j]. This causes the mismatch between the goal inside score b(C1,N) and the true model score (log of Eq. 2), which we adjust by adding a special unary rule that is always applied to the popped goal edge C1,N. We can calculate the dependency terms in Eq. 3 on the fly when expanding the chart. Let the currently popped edge be Ai,k, which will be combined with Bk,j into Ci,j. The key observation is that only one dependency arc (between root(hA i,k) and root(hB k,j)) is resolved at every combination (see Figure 2). For every rule C →A B we can define the head direction (see Section 4) and Pdep is obtained accordingly. For example, when the right child B becomes the head, b(Ci,j) = b(Ai,k) + b(Bk,j) + log Pdep(hl = m|x), where l = root(hA i,k) and m = root(hB k,j) (l < m). The Viterbi outside score is changed as: a(Ci,j) = ∑ k∈[1,N]\[i,j] max ck log Ptag(ck|x) + ∑ k∈L max hk log Pdep(hk|x), where L = [1, N] \ [k′|k′ ∈[i, j], root(hC i,j) ̸= k′]. We regard root(hC i,j) as an outside word since its head is undefined yet. For every outside word we independently assign the weight of its argmax 279 head, which may not comprise a well-formed dependency tree. We initialize the agenda by adding an item for every supertag C and word xi with the score a(Ci,i) = ∑ k∈I\{i} max log Ptag(ck|x) + ∑ k∈I max log Pdep(hk|x). Note that the dependency component of it is the same for every word. 3.2 Network Architecture Following Lewis et al. (2016) and Dozat and Manning (2016), we model Ptag and Pdep using biLSTMs for exploiting the entire sentence to capture the long range phenomena. See Figure 3 for the overall network architecture, where Ptag and Pdep share the common bi-LSTM hidden vectors. First we map every word xi to their hidden vector ri with bi-LSTMs. The input to the LSTMs is word embeddings, which we describe in Section 6. We add special start and end tokens to each sentence with the trainable parameters following Lewis et al. (2016). For Pdep, we use the biaffine transformation in Dozat and Manning (2016): gdep i = MLP dep child(ri), gdep hi = MLP dep head(rhi), Pdep(hi|x) (4) ∝exp((gdep i )TWdepgdep hi + wdepgdep hi ), where MLP is a multilayered perceptron. Though Lewis et al. (2016) simply use an MLP for mapping ri to Ptag, we additionally utilize the hidden vector of the most probable head hi = arg maxh′ i Pdep(h′ i|x), and apply ri and rhi to a bilinear function:2 gtag i = MLP tag child(ri), gtag hi = MLP tag head(rhi), (5) ℓ= (gtag i )TUtaggtag hi + Wtag [ gtag i gtag hi ] + btag, Ptag(ci|x) ∝exp(ℓc), where Utag is a third order tensor. As in Lewis et al. these values can be precomputed before search, which makes our A* parsing quite efficient. 4 CCG to Dependency Conversion Now we describe our conversion rules from a CCG tree to a dependency one, which we use in two pur2 This is inspired by the formulation of label prediction in Dozat and Manning (2016), which performs the best among other settings that remove or reverse the dependence between the head model and the supertag model. LSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM concat concat concat concat x1 x2 x3 x4 Bilinear Biaffine S NP S/S .. .. x1 x 2 x3 .. .. r 1 r 2 r 3 r 4 Pdep Ptag Figure 3: Neural networks of our supertag and dependency factored model. First we map every word xi to a hidden vector ri by bi-LSTMs, and then apply biaffine (Eq. 4) and bilinear (Eq. 
5) transformations to obtain the distributions of dependency heads (Pdep) and supertags (Ptag). poses: 1) creation of the training data for the dependency component of our model; and 2) extraction of a dependency arc at each combinatory rule during A* search (Section 3.1). Lewis and Steedman (2014) describe one way to extract dependencies from a CCG tree (LEWISRULE). Below in addition to this we describe two simpler alternatives (HEADFIRST and HEADFINAL), and see the effects on parsing performance in our experiments (Section 6). See Figure 4 for the overview. LEWISRULE This is the same as the conversion rule in Lewis and Steedman (2014). As shown in Figure 4c the output looks a familiar English dependency tree. For forward application and (generalized) forward composition, we define the head to be the left argument of the combinatory rule, unless it matches either X/X or X/(X\Y ), in which case the right argument is the head. For example, on “Black Monday” in Figure 4a we choose Monday as the head of Black. For the backward rules, the conversions are defined as the reverse of the corresponding forward rules. For other rules, RemovePunctuation (rp) chooses the non punctuation argument as the head, while Conjunction (Φ) chooses the right argument.3 3When applying LEWISRULE to Japanese, we ignore the feature values in determining the head argument, which we find often leads to a more natural dependency structure. For example, in “tabe ta” (eat PAST), the category of auxiliary verb “ta” is Sf1\Sf2 with f1 ̸= f2, and thus Sf1 ̸= Sf2. We choose “tabe” as the head in this case by removing the feature values, which makes the category X\X. 280 No , it was n′t Black Monday . S/S , NP (S\NP)/NP (S\NP )\(S\NP ) NP/NP NP . <B× > (S\NP)/NP NP > S\NP < S rp S > S rp S (a) English sentence I SUB English ACC speak want . Boku wa eigo wo hanasi tai . NP NP\NP NP NP\NP (S\NP)\NP S\S S\S < < <B2 NP NP (S\NP)\NP < S\NP < S < S (b) Japanese sentence “I want to speak English.” No , it was n’t Black Monday . (c) LEWISRULE No , it was n’t Black Monday . (d) HEADFIRST Boku wa eigo wo hanasi tai . (e) HEADFINAL Figure 4: Examples of applying conversion rules in Section 4 to English and Japanese sentences. One issue when applying this method for obtaining the training data is that due to the mismatch between the rule set of our CCG parser, for which we follow Lewis and Steedman (2014), and the grammar in English CCGbank (Hockenmaier and Steedman, 2007) we cannot extract dependencies from some of annotated CCG trees.4 For this reason, we instead obtain the training data for this method from the original dependency annotations on CCGbank. Fortunately the dependency annotations of CCGbank matches LEWISRULE above in most cases and thus they can be a good approximation to it. HEADFINAL Among SOV languages, Japanese is known as a strictly head final language, meaning that the head of every word always follows it. Japanese dependency parsing (Uchimoto et al., 1999; Kudo and Matsumoto, 2002) has exploited this property explicitly by only allowing left-toright dependency arcs. Inspired by this tradition, we try a simple HEADFINAL rule in Japanese CCG parsing, in which we always select the right argument as the head. For example we obtain the head final dependency tree in Figure 4e from the Japanese CCG tree in Figure 4b. HEADFIRST We apply the similar idea as HEADFINAL into English. 
Since English has the opposite, SVO word order, we define the simple “head first” rule, in which the left argument always becomes the head (Figure 4d). 4 For example, the combinatory rules in Lewis and Steedman (2014) do not contain Nconj →N N in CCGbank. Another difficulty is that in English CCGbank the name of each combinatory rule is not annotated explicitly. Though this conversion may look odd at first sight it also has some advantages over LEWISRULE. First, since the model with LEWISRULE is trained on the CCGbank dependencies, at inference, occasionally the two components Pdep and Ptag cause some conflicts on their predictions. For example, the true Viterbi parse may have a lower score in terms of dependencies, in which case the parser slows down and may degrade the accuracy. HEADFIRST, in contract, does not suffer from such conflicts. Second, by fixing the direction of arcs, the prediction of heads becomes easier, meaning that the dependency predictions become more reliable. Later we show that this is in fact the case for existing dependency parsers (see Section 5), and in practice, we find that this simple conversion rule leads to the higher parsing scores than LEWISRULE on English (Section 6). 5 Tri-training We extend the existing tri-training method to our models and apply it to our English parsers. Tri-training is one of the semi-supervised methods, in which the outputs of two parsers on unlabeled data are intersected to create (silver) new training data. This method is successfully applied to dependency parsing (Weiss et al., 2015) and CCG supertagging (Lewis et al., 2016). We simply combine the two previous approaches. Lewis et al. (2016) obtain their silver data annotated with the high quality supertags. Since they make this data publicly available 5, we obtain our silver data by assigning dependency 5https://github.com/uwnlp/taggerflow 281 structures on top of them.6 We train two very different dependency parsers from the training data extracted from CCGbank Section 02-21. This training data differs depending on our dependency conversion strategies (Section 4). For LEWISRULE, we extract the original dependency annotations of CCGbank. For HEADFIRST, we extract the head first dependencies from the CCG trees. Note that we cannot annotate dependency labels so we assign a dummy “none” label to every arc. The first parser is graph-based RBGParser (Lei et al., 2014) with the default settings except that we train an unlabeled parser and use word embeddings of Turian et al. (2010). The second parser is transition-based lstm-parser (Dyer et al., 2015) with the default parameters. On the development set (Section 00), with LEWISRULE dependencies RBGParser shows 93.8% unlabeled attachment score while that of lstm-parser is 92.5% using gold POS tags. Interestingly, the parsers with HEADFIRST dependencies achieve higher scores: 94.9% by RBGParser and 94.6% by lstm-parser, suggesting that HEADFIRST dependencies are easier to parse. For both dependencies, we obtain more than 1.7 million sentences on which two parsers agree. Following Lewis et al. (2016), we include 15 copies of CCGbank training set when using these silver data. Also to make effects of the tri-train samples smaller we multiply their loss by 0.4. 6 Experiments We perform experiments on English and Japanese CCGbanks. 6.1 English Experimental Settings We follow the standard data splits and use Sections 02-21 for training, Section 00 for development, and Section 23 for final evaluation. 
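Before turning to the results, here is a schematic view of the tri-training data construction described in Section 5. Parser outputs are abstracted as lists of head indices, and the two dummy parsers below are purely illustrative stand-ins for RBGParser and lstm-parser; the constants (15 copies of CCGbank, 0.4 loss weight) follow the text, while everything else is a simplified assumption.

```python
# Schematic sketch of tri-training data construction (Section 5).
# Each "parse" is a list of 1-based head indices (0 = root) for a sentence;
# two parsers agree on a sentence when their unlabeled head assignments match.

def build_tritraining_data(unlabeled, parser_a, parser_b, gold,
                           gold_copies=15, silver_weight=0.4):
    silver = []
    for sentence in unlabeled:
        heads_a = parser_a(sentence)
        heads_b = parser_b(sentence)
        if heads_a == heads_b:                       # keep only agreed analyses
            silver.append((sentence, heads_a, silver_weight))
    # 15 copies of the gold CCGbank training data, with full loss weight.
    gold_weighted = [(sent, heads, 1.0) for sent, heads in gold] * gold_copies
    return gold_weighted + silver

# Toy usage with dummy "parsers" that always attach each word to the next one.
def toy_parser(sentence):
    n = len(sentence.split())
    return [i + 2 for i in range(n - 1)] + [0]       # last word is the root

unlabeled = ["I saw her", "dogs bark"]
gold = [("John met Mary", [2, 0, 2])]
data = build_tritraining_data(unlabeled, toy_parser, toy_parser, gold)
print(len(data), data[-1])
```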
We report labeled and unlabeled F1 of the extracted CCG semantic dependencies obtained using generate program supplied with C&C parser. For our models, we adopt the pruning strategies in Lewis and Steedman (2014) and allow at most 50 categories per word, use a variable-width beam with β = 0.00001, and utilize a tag dictionary, which maps frequent words to the possible 6We annotate POS tags on this data using Stanford POS tagger (Toutanova et al., 2003). supertags7. Unless otherwise stated, we only allow normal form parses (Eisner, 1996; Hockenmaier and Bisk, 2010), choosing the same subset of the constraints as Lewis and Steedman (2014). We use as word representation the concatenation of word vectors initialized to GloVe8 (Pennington et al., 2014), and randomly initialized prefix and suffix vectors of the length 1 to 4, which is inspired by Lewis et al. (2016). All affixes appearing less than two times in the training data are mapped to “UNK”. Other model configurations are: 4-layer biLSTMs with left and right 300-dimensional LSTMs, 1-layer 100-dimensional MLPs with ELU non-linearity (Clevert et al., 2015) for all MLP dep child, MLP dep head, MLP tag child and MLP tag head, and the Adam optimizer with β1 = 0.9, β2 = 0.9, L2 norm (1e−6), and learning rate decay with the ratio 0.75 for every 2,500 iteration starting from 2e−3, which is shown to be effective for training the biaffine parser (Dozat and Manning, 2016). 6.2 Japanese Experimental Settings We follow the default train/dev/test splits of Japanese CCGbank (Uematsu et al., 2013). For the baselines, we use an existing shift-reduce CCG parser implemented in an NLP tool Jigg9 (Noji and Miyao, 2016), and our implementation of the supertag-factored model using bi-LSTMs. For Japanese, we use as word representation the concatenation of word vectors initialized to Japanese Wikipedia Entity Vector10, and 100dimensional vectors computed from randomly initialized 50-dimensional character embeddings through convolution (dos Santos and Zadrozny, 2014). We do not use affix vectors as affixes are less informative in Japanese. All characters appearing less than two times are mapped to “UNK”. We use the same parameter settings as English for bi-LSTMs, MLPs, and optimization. One issue in Japanese experiments is evaluation. The Japanese CCGbank is encoded in a different format than the English bank, and no standalone script for extracting semantic dependencies is available yet. For this reason, we evaluate the parser outputs by converting them to bunsetsu 7We use the same tag dictionary provided with their biLSTM model. 8http://nlp.stanford.edu/projects/ glove/ 9https://github.com/mynlp/jigg 10http://www.cl.ecei.tohoku.ac.jp/ ˜m-suzuki/jawiki_vector/ 282 Method Labeled Unlabeled CCGbank LEWISRULE w/o dep 85.8 91.7 LEWISRULE 86.0 92.5 HEADFIRST w/o dep 85.6 91.6 HEADFIRST 86.6 92.8 Tri-training LEWISRULE 86.9 93.0 HEADFIRST 87.6 93.3 Table 1: Parsing results (F1) on English development set. “w/o dep” means that the model discards dependency components at prediction. Method Labeled Unlabeled # violations CCGbank LEWISRULE w/o dep 85.8 91.7 2732 LEWISRULE 85.4 92.2 283 HEADFIRST w/o dep 85.6 91.6 2773 HEADFIRST 86.8 93.0 89 Tri-training LEWISRULE 86.7 92.8 253 HEADFIRST 87.7 93.5 66 Table 2: Parsing results (F1) on English development set when excluding the normal form constraints. # violations is the number of combinations violating the constraints on the outputs. 
dependencies, the syntactic representation ordinarily used in Japanese NLP (Kudo and Matsumoto, 2002). Given a CCG tree, we obtain this by first segmenting a sentence into bunsetsu (chunks) using CaboCha (Footnote 11: http://taku910.github.io/cabocha/) and then extracting the dependencies that cross a bunsetsu boundary, after obtaining the word-level, head-final dependencies as in Figure 4b. For example, the sentence in Figure 4e is segmented as "Boku wa | eigo wo | hanashi tai", from which we extract two dependencies (Boku wa) ← (hanashi tai) and (eigo wo) ← (hanashi tai). We perform this conversion for both gold and output CCG trees and calculate the (unlabeled) attachment accuracy. Though this is imperfect, it can detect important parse errors such as attachment errors and thus can be a good proxy for the performance as a CCG parser. 6.3 English Parsing Results Effect of Dependency We first see how the dependency components added in our model affect the performance. Table 1 shows the results on the development set with several configurations, in which "w/o dep" means discarding the dependency terms of the model and applying the attach-low heuristics (Section 1) instead (i.e., a supertag-factored model; Section 2.1).

Method                             Labeled  Unlabeled
CCGbank
  C&C (Clark and Curran, 2007)        85.5     91.7
  w/ LSTMs (Vaswani et al., 2016)     88.3     -
  EasySRL (Lewis et al., 2016)        87.2     -
  EasySRL reimpl                      86.8     92.3
  HEADFIRST w/o NF (Ours)             87.7     93.4
Tri-training
  EasySRL (Lewis et al., 2016)        88.0     92.9
  neuralccg (Lee et al., 2016)        88.7     93.7
  HEADFIRST w/o NF (Ours)             88.8     94.0
Table 3: Parsing results (F1) on the English test set (Section 23).

We can see that for both LEWISRULE and HEADFIRST, adding dependency terms improves the performance. Choice of Dependency Conversion Rule To our surprise, our simple HEADFIRST strategy always leads to better results than the linguistically motivated LEWISRULE. The absolute improvements from tri-training are equally large (about 1.0 points), suggesting that our model with dependencies can also benefit from the silver data. Excluding Normal Form Constraints One advantage of HEADFIRST is that the direction of arcs is always right, making the structures simpler and more parsable (Section 5). From another viewpoint, this fixed direction means that the constituent structure behind a (head-first) dependency tree is unique. Since the constituent structures of CCGbank trees basically follow the normal form (NF), we hypothesize that the model learned with HEADFIRST is able to force the outputs into NF automatically. We summarize the results without the NF constraints in Table 2, which shows that the above argument is correct; the number of NF violations on the outputs of HEADFIRST is much smaller than that of LEWISRULE (89 vs. 283). Interestingly, the scores of HEADFIRST slightly increase over the models with NF (e.g., 86.8 vs. 86.6 for CCGbank), suggesting that the NF constraints occasionally hinder the search of HEADFIRST models. Results on Test Set Parsing results on the test set (Section 23) are shown in Table 3 above, where we compare our best-performing HEADFIRST dependency model without NF constraints with several existing parsers. In the CCGbank experi
ment, our parser shows the better result than all the baseline parsers except C&C with an LSTM supertagger (Vaswani et al., 2016). Our parser outperforms EasySRL by 0.5% and our reimplementation of that parser (EasySRL reimpl) by 0.9% in terms of labeled F1. In the tri-training experiment, our parser shows much increased performance of 88.8% labeled F1 and 94.0% unlabeled F1, outperforming the current state-of-theart neuralccg (Lee et al., 2016) that uses recursive neural networks by 0.1 point and 0.3 point in terms of labeled and unlabeled F1. This is the best reported F1 in English CCG parsing. Efficiency Comparison We compare the efficiency of our parser with neuralccg and EasySRL reimpl.12 The results are shown in Table 4. For the overall speed (the third row), our parser is faster than neuralccg although lags behind EasySRL reimpl. Inspecting the details, our supertagger runs slower than those of neuralccg and EasySRL reimpl, while in A* search our parser processes over 7 times more sentences than neuralccg. The delay in supertagging can be attributed to several factors, in particular the differences in network architectures including the number of biLSTM layers (4 vs. 2) and the use of bilinear transformation instead of linear one. There are also many implementation differences in our parser (C++ A* parser with neural network model implemented with Chainer (Tokui et al., 2015)) and neuralccg (Java parser with C++ TensorFlow (Abadi et al., 2015) supertagger and recursive neural model in C++ DyNet (Neubig et al., 2017)). 6.4 Japanese Parsing Result We show the results of the Japanese parsing experiment in Table 5. The simple application of Lewis 12This experiment is performed on a laptop with 4-thread 2.0 GHz CPU. Method Category Bunsetsu Dep. Noji and Miyao (2016) 93.0 87.5 Supertag model 93.7 81.5 LEWISRULE (Ours) 93.8 90.8 HEADFINAL (Ours) 94.1 91.5 Table 5: Results of Japanese CCGbank. Yesterday buy−PAST curry−ACC eat−PAST Kinoo kat −ta karee −wo tabe −ta S/S S NP S\NP > S un NP/NP > NP < S Yesterday buy−PAST curry−ACC eat−PAST Kinoo kat −ta karee −wo tabe −ta S/S S NP S\NP un NP/NP > NP < S > S Figure 5: Examples of ambiguous Japanese sentence given fixed supertags. The English translation is “I ate the curry I bought yesterday”. et al. (2016) (Supertag model) is not effective for Japanese, showing the lowest attachment score of 81.5%. We observe a performance boost with our method, especially with HEADFINAL dependencies, which outperforms the baseline shift-reduce parser by 1.1 points on category assignments and 4.0 points on bunsetsu dependencies. The degraded results of the simple application of the supertag-factored model can be attributed to the fact that the structure of a Japanese sentence is still highly ambiguous given the supertags (Figure 5). This is particularly the case in constructions where phrasal adverbial/adnominal modifiers (with the supertag S/S) are involved. The result suggests the importance of modeling dependencies in some languages, at least Japanese. 7 Related Work There is some past work that utilizes dependencies in lexicalized grammar parsing, which we review briefly here. For Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag (1994)), there are studies to use the predicted dependency structure to improve HPSG parsing accuracy. Sagae et al. (2007) use dependencies to constrain the form of the output tree. 
As in our method, for every rule (schema) application they define which child becomes the head and impose a soft constraint that these dependencies agree with the output of the dependency parser. Our method is different 284 in that we do not use the one-best dependency structure alone, but rather we search for a CCG tree that is optimal in terms of dependencies and CCG supertags. Zhang et al. (2010) use the syntactic dependencies in a different way, and show that dependency-based features are useful for predicting HPSG supertags. In the CCG parsing literature, some work optimizes a dependency model, instead of supertags or a derivation (Clark and Curran, 2007; Xu et al., 2014). This approach is reasonable given that the objective matches the evaluation metric. Instead of modeling dependencies alone, our method finds a CCG derivation that has a higher dependency score. Lewis et al. (2015) present a joint model of CCG parsing and semantic role labeling (SRL), which is closely related to our approach. They map each CCG semantic dependency to an SRL relation, for which they give the A* upper bound by the score from a predicate to the most probable argument. Our approach is similar; the largest difference is that we instead model syntactic dependencies from each token to its head, and this is the key to our success. Since dependency parsing can be formulated as independent head selections similar to tagging, we can build the entire model on LSTMs to exploit features from the whole sentence. This formulation is not straightforward in the case of multi-headed semantic dependencies in their model. 8 Conclusion We have presented a new A* CCG parsing method, in which the probability of a CCG tree is decomposed into local factors of the CCG categories and its dependency structure. By explicitly modeling the dependency structure, we do not require any deterministic heuristics to resolve attachment ambiguities, and keep the model locally factored so that all the probabilities can be precomputed before running the search. Our parser efficiently finds the optimal parse and achieves the state-of-the-art performance in both English and Japanese parsing. Acknowledgments We are grateful to Mike Lewis for answering our questions and your Github repository from which we learned many things. We also thank Yuichiro Sawai for the faster LSTM implementation. This work was in part supported by JSPS KAKENHI Grant Number 16H06981, and also by JST CREST Grant Number JPMJCR1301. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: LargeScale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org. http://tensorflow.org/. Srinivas Bangalore and Aravind K Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational linguistics 25(2):237–265. Stephen Clark and James R. Curran. 2007. WideCoverage Efficient Statistical Parsing with CCG and Log-Linear Models. 
Computational Linguistics, Volume 33, Number 4, December 2007 http://aclweb.org/anthology/J07-4004. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). CoRR abs/1511.07289. http://arxiv.org/abs/1511.07289. C´ıcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning Character-level Representations for Part-of-Speech Tagging. ICML. Timothy Dozat and Christopher D. Manning. 2016. Deep Biaffine Attention for Neural Dependency Parsing. CoRR abs/1611.01734. http://arxiv.org/abs/1611.01734. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and A. Noah Smith. 2015. TransitionBased Dependency Parsing with Stack Long ShortTerm Memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 334–343. https://doi.org/10.3115/v1/P151033. Jason Eisner. 1996. Efficient Normal-Form Parsing for Combinatory Categorial Grammar. In 34th Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P96-1011. 285 Julia Hockenmaier and Yonatan Bisk. 2010. Normalform parsing for Combinatory Categorial Grammars with generalized composition and type-raising. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Coling 2010 Organizing Committee, pages 465–473. http://aclweb.org/anthology/C10-1053. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Linguistics 33(3):355–396. http://www.aclweb.org/anthology/J07-3004. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations. Transactions of the Association for Computational Linguistics 4:313–327. https://www.transacl.org/ojs/index.php/tacl/article/view/885. Dan Klein and Christopher D. Manning. 2003. A* Parsing: Fast Exact Viterbi Parse Selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. http://aclweb.org/anthology/N03-1016. Taku Kudo and Yuji Matsumoto. 2002. Japanese Dependency Analysis using Cascaded Chunking. In Proceedings of the 6th Conference on Natural Language Learning, CoNLL 2002, Held in cooperation with COLING 2002, Taipei, Taiwan, 2002. http://aclweb.org/anthology/W/W02/W022016.pdf. Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global Neural CCG Parsing with Optimality Guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2366–2376. http://aclweb.org/anthology/D16-1262. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-Rank Tensors for Scoring Dependency Structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1381–1391. https://doi.org/10.3115/v1/P141130. Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG Parsing and Semantic Role Labelling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1444– 1454. https://doi.org/10.18653/v1/D15-1169. 
Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 221–231. https://doi.org/10.18653/v1/N16-1026. Mike Lewis and Mark Steedman. 2014. A* CCG Parsing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 990– 1000. https://doi.org/10.3115/v1/D14-1107. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The Dynamic Neural Network Toolkit. arXiv preprint arXiv:1701.03980 . Hiroshi Noji and Yusuke Miyao. 2016. Jigg: A Framework for an Easy Natural Language Processing Pipeline. In Proceedings of ACL2016 System Demonstrations. Association for Computational Linguistics, pages 103–108. https://doi.org/10.18653/v1/P16-4018. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Kenji Sagae, Yusuke Miyao, and Jun’ichi Tsujii. 2007. HPSG Parsing with Shallow Dependency Constraints. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, pages 624–631. http://aclweb.org/anthology/P071079. Mark Steedman. 2000. The Syntactic Process. The MIT Press. Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a Next-Generation Open Source Framework for Deep Learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS). http://learningsys.org/papers/LearningSys 2015 paper 33.pdf. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-Rich Partof-Speech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational 286 Linguistics. http://www.aclweb.org/anthology/N031033. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-Supervised Learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 384–394. http://aclweb.org/anthology/P10-1040. Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isahara. 1999. Japanese Dependency Structure Analysis Based on Maximum Entropy Models. In Ninth Conference of the European Chapter of the Association for Computational Linguistics. http://aclweb.org/anthology/E99-1026. Sumire Uematsu, Takuya Matsuzaki, Hiroki Hanaoka, Yusuke Miyao, and Hideki Mima. 2013. Integrating Multiple Dependency Corpora for Inducing Wide-coverage Japanese CCG Resources. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 1042– 1051. http://www.aclweb.org/anthology/P13-1103. Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging With LSTMs. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 232– 237. https://doi.org/10.18653/v1/N16-1027. David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured Training for Neural Network Transition-Based Parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 323–333. https://doi.org/10.3115/v1/P15-1032. Wenduan Xu, Stephen Clark, and Yue Zhang. 2014. Shift-Reduce CCG Parsing with a Dependency Model. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 218–227. https://doi.org/10.3115/v1/P14-1021. Yao-zhong Zhang, Takuya Matsuzaki, and Jun’ichi Tsujii. 2010. A Simple Approach for HPSG Supertagging Using Dependency Information. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 645–648. http://aclweb.org/anthology/N10-1090. 287
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 288–298 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1027 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 288–298 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1027 A Full Non-Monotonic Transition System for Unrestricted Non-Projective Parsing Daniel Fern´andez-Gonz´alez and Carlos G´omez-Rodr´ıguez Universidade da Coru˜na FASTPARSE Lab, LyS Research Group, Departamento de Computaci´on Campus de Elvi˜na, s/n, 15071 A Coru˜na, Spain [email protected], [email protected] Abstract Restricted non-monotonicity has been shown beneficial for the projective arceager dependency parser in previous research, as posterior decisions can repair mistakes made in previous states due to the lack of information. In this paper, we propose a novel, fully non-monotonic transition system based on the non-projective Covington algorithm. As a non-monotonic system requires exploration of erroneous actions during the training process, we develop several non-monotonic variants of the recently defined dynamic oracle for the Covington parser, based on tight approximations of the loss. Experiments on datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic dynamic oracle outperforms the monotonic version in the majority of languages. 1 Introduction Greedy transition-based dependency parsers are widely used in different NLP tasks due to their speed and efficiency. They parse a sentence from left to right by greedily choosing the highestscoring transition to go from the current parser configuration or state to the next. The resulting sequence of transitions incrementally builds a parse for the input sentence. The scoring of the transitions is provided by a statistical model, previously trained to approximate an oracle, a function that selects the needed transitions to parse a gold tree. Unfortunately, the greedy nature that grants these parsers their efficiency also represents their main limitation. McDonald and Nivre (2007) show that greedy transition-based parsers lose accuracy to error propagation: a transition erroneously chosen by the greedy parser can place it in an incorrect and unknown configuration, causing more mistakes in the rest of the transition sequence. Training with a dynamic oracle (Goldberg and Nivre, 2012) improves robustness in these situations, but in a monotonic transition system, erroneous decisions made in the past are permanent, even when the availability of further information in later states might be useful to correct them. Honnibal et al. (2013) show that allowing some degree of non-monotonicity, by using a limited set of non-monotonic actions that can repair past mistakes and replace previously-built arcs, can increase the accuracy of a transition-based parser. In particular, they present a modified arc-eager transition system where the Left-Arc and Reduce transitions are non-monotonic: the former is used to repair invalid attachments made in previous states by replacing them with a leftward arc, and the latter allows the parser to link two words with a rightward arc that were previously left unattached due to an erroneous decision. 
Since the Right-Arc transition is still monotonic and leftward arcs can never be repaired because their dependent is removed from the stack by the arc-eager parser and rendered inaccessible, this approach can only repair certain kinds of mistakes: namely, it can fix erroneous rightward arcs by replacing them with a leftward arc, and connect a limited set of unattached words with rightward arcs. In addition, they argue that non-monotonicity in the training oracle can be harmful for the final accuracy and, therefore, they suggest to apply it only as a fallback component for a monotonic oracle, which is given priority over the non-monotonic one. Thus, this strategy will follow the path dictated by the monotonic oracle the majority of the time. Honnibal and Johnson (2015) present an extension of this transition system with an Unshift transition allowing it some extra flexibility to correct past errors. However, the restriction that only rightward 288 arcs can be deleted, and only by replacing them with leftward arcs, is still in place. Furthermore, both versions of the algorithm are limited to projective trees. In this paper, we propose a non-monotonic transition system based on the non-projective Covington parser, together with a dynamic oracle to train it with erroneous examples that will need to be repaired. Unlike the system developed in (Honnibal et al., 2013; Honnibal and Johnson, 2015), we work with full non-monotonicity. This has a twofold meaning: (1) our approach can repair previous erroneous attachments regardless of their original direction, and it can replace them either with a rightward or leftward arc as both arc transitions are non-monotonic;1 and (2) we use exclusively a non-monotonic oracle, without the interferences of monotonic decisions. These modifications are feasible because the non-projective Covington transition system is less rigid than the arc-eager algorithm, as words are never deleted from the parser’s data structures and can always be revisited, making it a better option to exploit the full potencial of non-monotonicity. To our knowledge, the presented system is the first nonmonotonic parser that can produce non-projective dependency analyses. Another novel aspect is that our dynamic oracle is approximate, i.e., based on efficiently-computable approximations of the loss due to the complexity of calculating its actual value in a non-monotonic and non-projective scenario. However, this is not a problem in practice: experimental results show how our parser and oracle can use non-monotonic actions to repair erroneous attachments, outperforming the monotonic version developed by G´omez-Rodr´ıguez and Fern´andez-Gonz´alez (2015) in a large majority of the datasets tested. 2 Preliminaries 2.1 Non-Projective Covington Transition System The non-projective Covington parser was originally defined by Covington (2001), and then recast by Nivre (2008) under the transition-based parsing framework. 1The only restriction is that parsing must still proceed in left-to-right order. For this reason, a leftward arc cannot be repaired with a rightward arc, because this would imply going back in the sentence. The other three combinations (replacing leftward with leftward, rightward with leftward or rightward with rightward arcs) are possible. 
The transition system that defines this parser is as follows: each parser configuration is of the form c = ⟨λ1, λ2, B, A⟩, such that λ1 and λ2 are lists of partially processed words, B is another list (called the buffer) containing currently unprocessed words, and A is the set of dependencies that have been built so far. Suppose that our input is a string w1 · · · wn, whose word occurrences will be identified with their indices 1 · · · n for simplicity. Then, the parser will start at an initial configuration cs(w1 . . . wn) = ⟨[], [], [1 . . . n], ∅⟩, and execute transitions chosen from those in Figure 1 until a terminal configuration of the form {⟨λ1, λ2, [], A⟩∈C} is reached. At that point, the sentence’s parse tree is obtained from A.2 These transitions implement the same logic as the double nested loop traversing word pairs in the original formulation by Covington (2001). When the parser’s configuration is ⟨λ1|i, λ2, j|B, A⟩, we say that it is considering the focus words i and j, located at the end of the first list and at the beginning of the buffer. At that point, the parser must decide whether these two words should be linked with a leftward arc i ←j (Left-Arc transition), a rightward arc i →j (Right-Arc transition), or not linked at all (No-Arc transition). However, the two transitions that create arcs will be disallowed in configurations where this would cause a violation of the single-head constraint (a node can have at most one incoming arc) or the acyclicity constraint (the dependency graph cannot have cycles). After applying any of these three transitions, i is moved to the second list to make i −1 and j the focus words for the next step. As an alternative, we can instead choose to execute a Shift transition which lets the parser read a new input word, placing the focus on j and j + 1. The resulting parser can generate any possible dependency tree for the input, including arbitrary non-projective trees. While it runs in quadratic worst-case time, in theory worse than lineartime transition-based parsers (e.g. (Nivre, 2003; G´omez-Rodr´ıguez and Nivre, 2013)), it has been shown to outspeed linear algorithms in practice, thanks to feature extraction optimizations that cannot be implemented in other parsers (Volokh and Neumann, 2012). In fact, one of the fastest dependency parsers ever reported uses this algorithm 2In general A is a forest, but it can be converted to a tree by linking headless nodes as dependents of an artificial root node at position 0. When we refer to parser outputs as trees, we assume that this transformation is being implicitly made. 289 Shift: ⟨λ1, λ2, j|B, A⟩⇒⟨λ1 · λ2|j, [], B, A⟩ No-Arc: ⟨λ1|i, λ2, B, A⟩⇒⟨λ1, i|λ2, B, A⟩ Left-Arc: ⟨λ1|i, λ2, j|B, A⟩⇒⟨λ1, i|λ2, j|B, A ∪{j →i}⟩ only if ∄k | k →i ∈A (single-head) and i →∗j ̸∈A (acyclicity). Right-Arc: ⟨λ1|i, λ2, j|B, A⟩⇒⟨λ1, i|λ2, j|B, A ∪{i →j}⟩ only if ∄k | k →j ∈A (single-head) and j →∗i ̸∈A (acyclicity). Figure 1: Transitions of the monotonic Covington non-projective dependency parser. The notation i →∗ j ∈A means that there is a (possibly empty) directed path from i to j in A. (Volokh, 2013). 2.2 Monotonic Dynamic Oracle A dynamic oracle is a function that maps a configuration c and a gold tree tG to the set of transitions that can be applied in c and lead to some parse tree t minimizing the Hamming loss with respect to tG (the amount of nodes whose head is different in t and tG). 
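As a concrete reading of the transitions in Figure 1 above, the following sketch implements them over a configuration (λ1, λ2, B, A) represented with plain Python lists and a set of (head, dependent) pairs. It is an illustration of the definitions, not the authors' implementation, and the reachability test is deliberately naive.

```python
# Monotonic non-projective Covington transitions (illustrative sketch).
# A configuration is (l1, l2, buf, arcs); arcs holds (head, dependent) pairs.

def has_head(arcs, node):
    return any(d == node for _, d in arcs)

def path_exists(arcs, start, end):
    """True if there is a (possibly empty) directed path start ->* end in arcs."""
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop()
        if node == end:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(d for h, d in arcs if h == node)
    return False

def shift(l1, l2, buf, arcs):
    j, rest = buf[0], buf[1:]
    return l1 + l2 + [j], [], rest, arcs

def no_arc(l1, l2, buf, arcs):
    return l1[:-1], [l1[-1]] + l2, buf, arcs

def left_arc(l1, l2, buf, arcs):
    i, j = l1[-1], buf[0]
    # preconditions from Figure 1: single-head for i, no path i ->* j (acyclicity)
    assert not has_head(arcs, i) and not path_exists(arcs, i, j)
    return l1[:-1], [i] + l2, buf, arcs | {(j, i)}

def right_arc(l1, l2, buf, arcs):
    i, j = l1[-1], buf[0]
    # preconditions from Figure 1: single-head for j, no path j ->* i (acyclicity)
    assert not has_head(arcs, j) and not path_exists(arcs, j, i)
    return l1[:-1], [i] + l2, buf, arcs | {(i, j)}

# Initial configuration for a sentence of n words: ([], [], list(range(1, n + 1)), set()).
```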
Following Goldberg and Nivre (2013), we say that an arc set A is reachable from configuration c, and we write c ⇝A, if there is some (possibly empty) path of transitions from c to some configuration c′ = ⟨λ1, λ2, B, A′⟩, with A ⊆A′. Then, we can define the loss of configuration c as ℓ(c) = min t|c⇝t L(t, tG), and therefore, a correct dynamic oracle will return the set of transitions od(c, tG) = {τ | ℓ(c) −ℓ(τ(c)) = 0}, i.e., the set of transitions that do not increase configuration loss, and thus lead to the best parse (in terms of loss) reachable from c. Hence, implementing a dynamic oracle reduces to computing the loss ℓ(c) for each configuration c. Goldberg and Nivre (2013) show a straightforward method to calculate loss for parsers that are arc-decomposable, i.e., those where every arc set A that can be part of a well-formed parse verifies that if c ⇝(i →j) for every i →j ∈A (i.e., each of the individual arcs of A is reachable from a given configuration c), then c ⇝A (i.e., the set A as a whole is reachable from c). If this holds, then the loss of a configuration c equals the number of gold arcs that are not individually reachable from c, which is easy to compute in most parsers. G´omez-Rodr´ıguez and Fern´andez-Gonz´alez (2015) show that the non-projective Covington parser is not arc-decomposable because sets of individually reachable arcs may form cycles together with already-built arcs, preventing them from being jointly reachable due to the acyclicity constraint. In spite of this, they prove that a dynamic oracle for the Covington parser can be efficiently built by counting individually unreachable arcs, and correcting for the presence of such cycles. Concretely, the loss is computed as: ℓ(c) = |U(c, tG)| + nc(A ∪I(c, tG)) where I(c, tG) = {x →y ∈tG | c ⇝(x →y)} is the set of individually reachable arcs of tG from configuration c; U(c, tG) is the set of individually unreachable arcs of tG from c, computed as tG\I(c, tG); and nc(G) denotes the number of cycles in a graph G. Therefore, to calculate the loss of a configuration c, we only need to compute the two terms |U(c, tG)| and nc(A ∪I(c, tG)). To calculate the first term, given a configuration c with focus words i and j (i.e., c = ⟨λ1|i, λ2, j|B, A⟩), an arc x →y will be in U(c, tG) if it is not in A, and at least one of the following holds: • j > max(x, y), (i.e., we have read too far in the string and can no longer get max(x, y) as right focus word), • j = max(x, y) ∧i < min(x, y), (i.e., we have max(x, y) as the right focus word but the left focus word has already moved left past min(x, y), and we cannot go back), • there is some z ̸= 0, z ̸= x such that z → y ∈A, (i.e., we cannot create x →y because it would violate the single-head constraint), • x and y are on the same weakly connected component of A (i.e., we cannot create x → y due to the acyclicity constraint). The second term of the loss, nc(A ∪I(c, tG)), can be computed by first obtaining I(c, tG) as tG \ U(c, tG). Since the graph I(c, tG) has indegree 1, the algorithm by Tarjan (1972) can then be used to find and count the cycles in O(n) time. 290 Algorithm 1 Computation of the loss of a configuration in the monotonic oracle. 
1: function LOSS(c = ⟨λ1|i, λ2, j|B, A⟩, tG) 2: U ←∅ ▷Variable U is for U(c, tG) 3: for each x →y ∈(tG \ A) do 4: left ←min(x, y) 5: right ←max(x, y) 6: if j > right ∨ 7: (j = right ∧i < left) ∨ 8: (∃z > 0, z ̸= x : z →y ∈A) ∨ 9: WEAKLYCONNECTED(A, x, y) then 10: U ←u ∪{x →y} 11: I ←tG \ U ▷Variable I is for I(c, tG) 12: return |U | + COUNTCYCLES(A ∪I ) Algorithm 1 shows the resulting loss calculation algorithm, where COUNTCYCLES is a function that counts the number of cycles in the given graph and WEAKLYCONNECTED returns whether two given nodes are weakly connected in A. 3 Non-Monotonic Transition System for the Covington Non-Projective Parser We now define a non-monotonic variant of the Covington non-projective parser. To do so, we allow the Right-Arc and Left-Arc transitions to create arcs between any pair of nodes without restriction. If the node attached as dependent already had a previous head, the existing attachment is discarded in favor of the new one. This allows the parser to correct erroneous attachments made in the past by assigning new heads, while still enforcing the single-head constraint, as only the most recent head assigned to each node is kept. To enforce acyclicity, one possibility would be to keep the logic of the monotonic algorithm, forbidding the creation of arcs that would create cycles. However, this greatly complicates the definition of the set of individually unreachable arcs, which is needed to compute the loss bounds that will be used by the dynamic oracle. This is because a gold arc x →y may superficially seem unreachable due to forming a cycle together with arcs in A, but it might in fact be reachable if there is some transition sequence that first breaks the cycle using non-monotonic transitions to remove arcs from A, to then create x →y. We do not know of a way to characterize the conditions under which such a transition sequence exists, and thus cannot estimate the loss efficiently. Instead, we enforce the acyclicity constraint in a similar way to the single-head constraint: Right-Arc and Left-Arc transitions are always allowed, even if the prospective arc would create a cycle in A. However, if the creation of a new arc x →y generates a cycle in A, we immediately remove the arc of the form z →x from A (which trivially exists, and is unique due to the singlehead constraint). This not only enforces the acyclicity constraint while keeping the computation of U(c, tG) simple and efficient, but also produces a straightforward, coherent algorithm (arc transitions are always allowed, and both constraints are enforced by deleting a previous arc) and allows us to exploit non-monotonicity to the maximum (we can not only recover from assigning a node the wrong head, but also from situations where previous errors together with the acyclicity constraint prevent us from building a gold arc, keeping with the principle that later decisions override earlier ones). In Figure 2, we can see the resulting nonmonotonic transition system for the non-projective Covington algorithm, where, unlike the monotonic version, all transitions are allowed at each configuration, and the single-head and acyclicity constraints are kept in A by removing offending arcs. 4 Non-Monotonic Approximate Dynamic Oracle To successfully train a non-monotonic system, we need a dynamic oracle with error exploration, so that the parser will be put in erroneous states and need to apply non-monotonic transitions in order to repair them. 
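Before turning to the non-monotonic oracle, it is useful to have Algorithm 1 above in runnable form. The sketch below is a direct transcription under the same (head, dependent) arc encoding as the earlier transition sketch; the helpers are straightforward rather than the linear-time versions the paper assumes, so treat it as illustrative only.

```python
# Runnable transcription of Algorithm 1 (monotonic loss). gold_arcs and arcs
# are sets of (head, dependent) pairs; i and j are the current focus words.

def weakly_connected(arcs, x, y):
    """True if x and y lie in the same weakly connected component of arcs."""
    adj = {}
    for h, d in arcs:
        adj.setdefault(h, set()).add(d)
        adj.setdefault(d, set()).add(h)
    frontier, seen = [x], set()
    while frontier:
        n = frontier.pop()
        if n == y:
            return True
        if n in seen:
            continue
        seen.add(n)
        frontier.extend(adj.get(n, ()))
    return False

def count_cycles(arcs):
    """Cycle count in a graph with in-degree at most 1: follow head pointers."""
    head = {d: h for h, d in arcs}
    cycles, done = 0, set()
    for start in head:
        path, node = set(), start
        while node in head and node not in done and node not in path:
            path.add(node)
            node = head[node]
        if node in path:            # returned to the current path: one new cycle
            cycles += 1
        done.update(path)
    return cycles

def monotonic_loss(i, j, arcs, gold_arcs):
    unreachable = set()
    for x, y in gold_arcs - arcs:                 # x -> y is a pending gold arc
        left, right = min(x, y), max(x, y)
        other_head = any(d == y and h not in (0, x) for h, d in arcs)
        if (j > right
                or (j == right and i < left)
                or other_head
                or weakly_connected(arcs, x, y)):
            unreachable.add((x, y))
    reachable = gold_arcs - unreachable
    return len(unreachable) + count_cycles(arcs | reachable)
```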
To achieve that, we modify the dynamic oracle defined by Gómez-Rodríguez and Fernández-González (2015) so that it can deal with non-monotonicity. Our modification is an approximate dynamic oracle: due to the extra flexibility added to the algorithm by non-monotonicity, we do not know of an efficient way of obtaining an exact calculation of the loss of a given configuration. Instead, we use upper and lower bounds on the loss, which we empirically show to be very tight (less than 1% relative error with respect to the real loss) and sufficient for the algorithm to provide better accuracy than the exact monotonic oracle. First of all, we adapt the computation of the set of individually unreachable arcs U(c, tG) to the new algorithm. In particular, if c has focus words i and j (i.e., c = ⟨λ1|i, λ2, j|B, A⟩), then an arc x →y is in U(c, tG) if it is not in A, and at least one of the following holds: • j > max(x, y) (i.e., we have read too far in the string and can no longer get max(x, y) as right focus word), • j = max(x, y) ∧ i < min(x, y) (i.e., we have max(x, y) as the right focus word but the left focus word has already moved left past min(x, y), and we cannot move it back). Shift: ⟨λ1, λ2, j|B, A⟩⇒⟨λ1 · λ2|j, [], B, A⟩ No-Arc: ⟨λ1|i, λ2, B, A⟩⇒⟨λ1, i|λ2, B, A⟩ Left-Arc: ⟨λ1|i, λ2, j|B, A⟩⇒⟨λ1, i|λ2, j|B, (A ∪{j →i}) \{x →i ∈A} \ {k →j ∈A | i →∗k ∈A}⟩ Right-Arc: ⟨λ1|i, λ2, j|B, A⟩⇒⟨λ1, i|λ2, j|B, (A ∪{i →j}) \{x →j ∈A} \ {k →i ∈A | j →∗k ∈A}⟩ Figure 2: Transitions of the non-monotonic Covington non-projective dependency parser. The notation i →∗j ∈A means that there is a (possibly empty) directed path from i to j in A. Note that, since the head of a node can change during the parsing process and arcs that produce cycles in A can be built, the last two conditions present in the monotonic scenario for computing U(c, tG) are not needed when we use non-monotonicity and, as a consequence, the set of individually reachable arcs I(c, tG) is larger: due to the greater flexibility provided by non-monotonicity, we can reach arcs that would be unreachable for the monotonic version. Since arcs that are in this new U(c, tG) are unreachable even by the non-monotonic parser, |U(c, tG)| is trivially a lower bound of the loss ℓ(c). It is worth noting that there always exists at least one transition sequence that builds every arc in I(c, tG) at some point (although not all of them necessarily appear in the final tree, due to non-monotonicity). This can be easily shown based on the fact that the non-monotonic parser does not forbid transitions at any configuration. Thanks to this, we can generate one such sequence by just applying the original Covington (2001) criteria (choose an arc transition whenever the focus words are linked in I(c, tG), and otherwise Shift or No-Arc depending on whether the left focus word is the first word in the sentence or not), although this sequence is not necessarily optimal in terms of loss. In such a transition sequence, the gold arcs that are missed are (1) those in U(c, tG), and (2) those that are removed by the cycle-breaking in Left-Arc and Right-Arc transitions. In practice, configurations where (2) occurs are uncommon, so this lower bound is a very close approximation of the real loss, as will be seen empirically below.
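The non-monotonic arc transitions in Figure 2 above differ from the monotonic ones only in how the arc set is updated, which the following sketch isolates (reusing path_exists from the transition sketch in Section 2.1; again an illustration, not the authors' code).

```python
def add_arc_nonmonotonic(arcs, head, dep):
    """Create head -> dep as in the Left-Arc/Right-Arc of Figure 2.

    Single-head: any existing x -> dep is dropped in favour of the new arc.
    Acyclicity: if dep already reaches head (dep ->* head), the new arc would
    close a cycle, so the unique incoming arc z -> head is dropped instead of
    forbidding the transition."""
    new_arcs = {(h, d) for h, d in arcs if d != dep}            # drop old head of dep
    if path_exists(new_arcs, dep, head):                        # new arc would close a cycle
        new_arcs = {(h, d) for h, d in new_arcs if d != head}   # break it: drop z -> head
    new_arcs.add((head, dep))
    return new_arcs

# With focus words i and j, Left-Arc updates the arc set via
# add_arc_nonmonotonic(arcs, j, i) and Right-Arc via add_arc_nonmonotonic(arcs, i, j);
# the list manipulations are as in the monotonic sketch, and no preconditions are checked.
```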
This reasoning also helps us calculate an upper bound of the loss: in a transition sequence as described, if we only build the arcs in I(c, tG) and none else, the amount of arcs removed by breaking cycles (2) cannot be larger than the number of cycles in A ∪I(c, tG). This means that |U(c, tG)|+nc(A∪I(c, tG)) is an upper bound of the loss ℓ(c). Note that, contrary to the monotonic case, this expression does not always give us the exact loss, for several reasons: firstly, A∪I(c, tG) can have non-disjoint cycles (a node may have different heads in A and I since attachments are not permanent, contrary to the monotonic version) and thus removing a single arc may break more than one cycle; secondly, the removed arc can be a non-gold arc of A and therefore not incur loss; and thirdly, there may exist alternative transition sequences where a cycle in A ∪I(c, tG) is broken early by non-monotonic configurations that change the head of a wrongly-attached node in A to a different (and also wrong) head,3 removing the cycle before the cycle-breaking mechanism needs to come into play without incurring in extra errors. Characterizing the situations where such an alternative exists is the main difficulty for an exact calculation of the loss. However, it is possible to obtain a closer upper bound to the real loss if we consider the following: for each cycle in A ∪I(c, tG) that will be broken by the transition sequence described above, we can determine exactly which is the arc removed by cycle-breaking (if x →y is the arc that will close the cycle according to the Covington arc-building order, then the affected arc is the one of the form z →x). The cycle can only cause the loss of a gold arc if that arc z →x is gold, which can be trivially checked. Hence, if we call cycles where that holds problematic cycles, then the expression 3Note that, in this scenario, the new head must also be wrong because otherwise the newly created arc would be an arc of I(c, tG) (and therefore, would not be breaking a cycle in A ∪I(c, tG)). However, replacing a wrong attachment with another wrong attachment need not increase loss. 
292 average value relative difference to loss Language lower loss pc upper upper lower pc upper upper Arabic 0.66925 0.67257 0.67312 0.68143 0.00182 0.00029 0.00587 Basque 0.58260 0.58318 0.58389 0.62543 0.00035 0.00038 0.02732 Catalan 0.58009 0.58793 0.58931 0.60644 0.00424 0.00069 0.00961 Chinese 0.56515 0.56711 0.57156 0.62921 0.00121 0.00302 0.03984 Czech 0.57521 0.58357 0.59401 0.62883 0.00476 0.00685 0.02662 English 0.55267 0.56383 0.56884 0.59494 0.00633 0.00294 0.01767 Greek 0.56123 0.57443 0.57983 0.61256 0.00731 0.00296 0.02256 Hungarian 0.46495 0.46672 0.46873 0.48797 0.00097 0.00114 0.01165 Italian 0.62033 0.62612 0.62767 0.64356 0.00307 0.00082 0.00883 Turkish 0.60143 0.60215 0.60660 0.63560 0.00060 0.00329 0.02139 Bulgarian 0.61415 0.62257 0.62433 0.64497 0.00456 0.00086 0.01233 Danish 0.67350 0.67904 0.68119 0.69436 0.00291 0.00108 0.00916 Dutch 0.69201 0.70600 0.71105 0.74008 0.00709 0.00251 0.01862 German 0.54581 0.54755 0.55080 0.58182 0.00104 0.00208 0.02033 Japanese 0.60515 0.60515 0.60515 0.60654 0.00000 0.00000 0.00115 Portuguese 0.58880 0.60063 0.60185 0.61780 0.00651 0.00067 0.00867 Slovene 0.56155 0.56860 0.57135 0.60373 0.00396 0.00153 0.01979 Spanish 0.58247 0.59119 0.59277 0.61273 0.00487 0.00089 0.01197 Swedish 0.57543 0.58636 0.58933 0.61104 0.00585 0.00153 0.01383 Average 0.59009 0.59656 0.59954 0.62416 0.00355 0.00176 0.01513 Table 1: Average value of the different bounds and the loss, and of the relative differences from each bound to the loss, on CoNLL-XI (first block) and CoNLL-X (second block) datasets during 100,000 transitions. For each language, we show in boldface the average value and relative difference of the bound that is closer to the loss. |U(c, tG)| + npc(A ∪I(c, tG)), where “pc” stands for problematic cycles, is a closer upper bound to the loss ℓ(c) and the following holds: |U(c, tG)| ≤ℓ(c) ≤|U(c, tG)|+npc(A∪I(c, tG)) ≤|U(c, tG)| + nc(A ∪I(c, tG)) As mentioned before, unlike the monotonic approach, a node can have a different head in A than in I(c, tG) and, as a consequence, the resulting graph A ∪I(c, tG) has maximum in-degree 2 rather than 1, and there can be overlapping cycles. Therefore, the computation of the non-monotonic terms nc(A ∪I(c, tG)) and npc(A ∪I(c, tG)) requires an algorithm such as the one by Johnson (1975) to find all elementary cycles in a directed graph. This runs in O((n + e)(c + 1)), where n is the number of vertices, e is the number of edges and c is the number of elementary cycles in the graph. This implies that the calculation of the two non-monotonic upper bounds is less efficient than the linear loss computation in the monotonic scenario. However, a non-monotonic algorithm that uses the lower bound as loss expression is the fastest option (even faster than the monotonic approach) as the oracle does not need to compute cycles at all, speeding up the training process. Algorithm 2 shows the non-monotonic variant of Algorithm 1, where COUNTRELEVANTCYCLES is a function that counts the number of cycles or problematic cycles in the given graph, Algorithm 2 Computation of the approximate loss of a non-monotonic configuration. 1: function LOSS(c = ⟨λ1|i, λ2, j|B, A⟩, tG) 2: U ←∅ ▷Variable U is for U(c, tG) 3: for each x →y ∈(tG \ A) do 4: left ←min(x, y) 5: right ←max(x, y) 6: if j > right ∨ 7: (j = right ∧i < left) then 8: U ←u ∪{x →y} 9: I ←tG \ U ▷Variable I is for I(c, tG) 10: return |U | + COUNTRELEVANTCYCLES(A ∪I ) depending on the upper bound implemented, and will return 0 in case we use the lower bound. 
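As a runnable counterpart to Algorithm 2, the sketch below computes the lower bound |U(c, tG)| and the simpler upper bound |U(c, tG)| + nc(A ∪ I(c, tG)); networkx's elementary-cycle enumeration stands in for Johnson's (1975) algorithm, and the tighter "problematic cycle" variant is only indicated in a comment, since it additionally needs the Covington arc-building order. Treat this as an illustration rather than the authors' implementation.

```python
import networkx as nx   # simple_cycles enumerates elementary cycles (Johnson-style)

def nonmonotonic_loss_bounds(i, j, arcs, gold_arcs):
    """Return (lower, upper) bounds on the loss of configuration
    <lambda1|i, lambda2, j|B, A> under the non-monotonic parser."""
    unreachable = set()
    for x, y in gold_arcs - arcs:
        left, right = min(x, y), max(x, y)
        # Only the two positional conditions survive in the non-monotonic case.
        if j > right or (j == right and i < left):
            unreachable.add((x, y))
    reachable = gold_arcs - unreachable

    graph = nx.DiGraph()
    graph.add_edges_from(arcs | reachable)
    n_cycles = sum(1 for _ in nx.simple_cycles(graph))

    lower = len(unreachable)
    upper = lower + n_cycles
    # The tighter bound |U| + npc would count only "problematic" cycles, i.e.
    # those whose broken arc z -> x (x being the node that closes the cycle in
    # the left-to-right arc-building order) is a gold arc.
    return lower, upper
```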
5 Evaluation of the Loss Bounds To determine how close the lower bound |U(c, tG)| and the upper bounds |U(c, tG)| + npc(A∪I(c, tG)) and |U(c, tG)|+nc(A∪I(c, tG)) are to the actual loss in practical scenarios, we use exhaustive search to calculate the real loss of a given configuration, to then compare it with the bounds. This is feasible because the lower and upper bounds allow us to prune the search space: if an upper and a lower bound coincide for a configuration we already know the loss and need not keep searching, and if we can branch to two configurations such that the lower bound of one is greater or equal than an upper bound of the other, we can discard the former as it will never lead to smaller loss than the latter. Therefore, this ex293 Unigrams L0w; L0p; L0wp; L0l; L0hw; L0hp; L0hl; L0l′w; L0l′p; L0l′l; L0r′w; L0r′p; L0r′l; L0h2w; L0h2p; L0h2l; L0lw; L0lp; L0ll; L0rw; L0rp; L0rl; L0wd; L0pd; L0wvr; L0pvr; L0wvl; L0pvl; L0wsl; L0psl; L0wsr; L0psr; L1w; L1p; L1wp; R0w; R0p; R0wp; R0hw; R0hp;R0hl; R0h2w; R0h2p; R0l′w; R0l′p; R0l′l; R0lw; R0lp; R0ll; R0wd; R0pd; R0wvl; R0pvl; R0wsl; R0psl; R1w; R1p; R1wp; R2w; R2p; R2wp; CLw; CLp; CLwp; CRw; CRp; CRwp; Pairs L0wp+R0wp; L0wp+R0w; L0w+R0wp; L0wp+R0p; L0p+R0wp; L0w+R0w; L0p+R0p; R0p+R1p; L0w+R0wd; L0p+R0pd; Triples R0p+R1p+R2p; L0p+R0p+R1p; L0hp+L0p+R0p; L0p+L0l′p+R0p; L0p+L0r′p+R0p; L0p+R0p+R0l′p; L0p+L0l′p+L0lp; L0p+L0r′p+L0rp; L0p+L0hp+L0h2p; R0p+R0l′p+R0lp; Table 2: Feature templates. L0 and R0 denote the left and right focus words; L1, L2, . . . are the words to the left of L0 and R1, R2, . . . those to the right of R0. Xih means the head of Xi, Xih2 the grandparent, Xil and Xil′ the farthest and closest left dependents, and Xir and Xir′ the farthest and closest right dependents, respectively. CL and CR are the first and last words between L0 and R0 whose head is not in the interval [L0, R0]. Finally, w stands for word form; p for PoS tag; l for dependency label; d is the distance between L0 and R0; vl, vr are the left/right valencies (number of left/right dependents); and sl, sr the left/right label sets (dependency labels of left/right dependents). haustive search with pruning guarantees to find the exact loss. Due to the time complexity of this process, we undertake the analysis of only the first 100,000 transitions on each dataset of the nineteen languages available from CoNLL-X and CoNLL-XI shared tasks (Buchholz and Marsi, 2006; Nivre et al., 2007). In Table 1, we present the average values for the lower bound, both upper bounds and the loss, as well as the relative differences from each bound to the real loss. After those experiments, we conclude that the lower and the closer upper bounds are a tight approximation of the loss, with both bounds incurring relative errors below 0.8% in all datasets. If we compare them, the real loss is closer to the upper bound |U(c, tG)| + npc(A ∪I(c, tG)) in the majority of datasets (12 out of 18 languages, excluding Japanese where both bounds were exactly equal to the real loss in the whole sample of configurations). This means that the term npc(A∪I(c, tG)) provides a close approximation of the gold arcs missed by the presence of cycles in A. Regarding the upper bound |U(c, tG)|+nc(A∪I(c, tG)), it presents a more variable relative error, ranging from 0.1% to 4.0%. Thus, although we do not know an algorithm to obtain the exact loss which is fast enough to be practical, any of the three studied loss bounds can be used to obtain a feasible approximate dynamic oracle with full non-monotonicity. 
6 Experiments To prove the usefulness of our approach, we implement the static, dynamic monotonic and nonmonotonic oracles for the non-projective Covington algorithm and compare their accuracies on nine datasets4 from the CoNLL-X shared task (Buchholz and Marsi, 2006) and all datasets from the CoNLL-XI shared task (Nivre et al., 2007). For the non-monotonic algorithm, we test the three different loss expressions defined in the previous section. We train an averaged perceptron model for 15 iterations and use the same feature templates for all languages5 which are listed in detail in Table 2. 6.1 Results The accuracies obtained by the non-projective Covington parser with the three available oracles are presented in Table 3, in terms of Unlabeled (UAS) and Labeled Attachment Score (LAS). For the non-monotonic dynamic oracle, three variants are shown, one for each loss expression implemented. As we can see, the novel non-monotonic oracle improves over the accuracy of the monotonic version on 14 out of 19 languages (0.32 in UAS on average) with the best loss calculation being |U(c, tG)| + nc(A ∪I(c, tG)), where 6 of these improvements are statistically significant at the .05 level (Yeh, 2000). The other two loss calculation methods also achieve good results, outperforming the monotonic algorithm on 12 out of 19 datasets tested. The loss expression |U(c, tG)| + nc(A ∪ I(c, tG)) obtains greater accuracy on average than the other two loss expressions, including the more adjusted upper bound that is provably closer to the real loss. This could be explained by the fact that 4We excluded the languages from CoNLL-X that also appeared in CoNLL-XI, i.e., if a language was present in both shared tasks, we used the latest version. 5No feature optimization is performed since our priority in this paper is not to compete with state-of-the-art systems, but to prove, under uniform experimental settings, that our approach outperforms the baseline system. 
294 dynamic dynamic non-monotonic static monotonic lower pc upper upper Language UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS Arabic 80.67 66.51 82.76∗ 68.48∗ 83.29∗ 69.14∗ 83.18∗ 69.05∗ 83.40† 69.29† Basque 76.55 66.05 77.49† 67.31† 74.61 65.31 74.69 65.18 74.27 64.78 Catalan 90.52 85.09 91.37∗ 85.98∗ 90.51 85.35 90.40 85.30 90.44 85.35 Chinese 84.93 80.80 85.82 82.15 86.55∗ 82.53∗ 86.29∗ 82.27∗ 86.60∗ 82.51∗ Czech 78.49 61.77 80.21∗ 63.52∗ 81.32† 64.89† 81.33† 64.81† 81.49† 65.18† English 85.35 84.29 87.47∗ 86.55∗ 88.44† 87.37† 88.23† 87.22† 88.50† 87.55† Greek 79.47 69.35 80.76 70.43 80.90 70.46 80.84 70.34 81.02∗ 70.49∗ Hungarian 77.65 68.32 78.84∗ 70.16∗ 78.67∗ 69.83∗ 78.47∗ 69.66∗ 78.65∗ 69.74∗ Italian 84.06 79.79 84.30 80.17 84.38 80.30 84.64 80.52 84.47 80.32 Turkish 81.28 70.97 81.14 71.38 80.65 71.15 80.80 71.29 80.60 71.07 Bulgarian 89.13 85.30 90.45∗ 86.86∗ 91.36† 87.88† 91.33† 87.89† 91.73† 88.26† Danish 86.00 81.49 86.91∗ 82.75∗ 86.83∗ 82.63∗ 86.89∗ 82.74∗ 86.94∗ 82.68∗ Dutch 81.54 78.46 82.07 79.26 82.78∗ 79.64∗ 82.80∗ 79.68∗ 83.02† 79.92† German 86.97 83.91 87.95∗ 85.17∗ 87.31 84.37 87.18 84.22 87.48 84.54 Japanese 93.63 92.20 93.67 92.33 94.02 92.68 94.02 92.68 93.97 92.66 Portuguese 86.55 82.61 87.45∗ 83.62∗ 87.17∗ 83.47∗ 87.12∗ 83.45∗ 87.40∗ 83.71∗ Slovene 76.76 63.53 77.86 64.43 80.39† 67.04† 80.56† 67.10† 80.47† 67.10† Spanish 79.20 76.00 80.12∗ 77.24∗ 81.36∗ 78.30∗ 81.12∗ 77.99∗ 81.33∗ 78.16∗ Swedish 87.43 81.77 88.05∗ 82.77∗ 88.20∗ 83.02∗ 88.09∗ 82.87∗ 88.36∗ 83.16∗ Average 83.48 76.75 84.46 77.92 84.67 78.18 84.63 78.12 84.74 78.24 Table 3: Parsing accuracy (UAS and LAS, including punctuation) of the Covington non-projective parser with static, and dynamic monotonic and non-monotonic oracles on CoNLL-XI (first block) and CoNLLX (second block) datasets. For the dynamic non-monotonic oracle, we show the performance with the three loss expressions, where lower stands for the lower bound |U(c, tG)|, pc upper for the upper bound |U(c, tG)| + npc(A ∪I(c, tG)), and upper for the upper bound |U(c, tG)| + nc(A ∪I(c, tG)). For each language, we run five experiments with the same setup but different seeds and report the averaged accuracy. Best results for each language are shown in boldface. Statistically significant improvements (α = .05) of both dynamic oracles are marked with ∗if they are only over the static oracle, and with † if they are over the opposite dynamic oracle too. identifying problematic cycles is a difficult task to learn for the parser, and for this reason a more straightforward approach, which tries to avoid all kinds of cycles (regardless of whether they will cost gold arcs or not), can perform better. This also leads us to hypothesize that, even if it were feasible to build an oracle with the exact loss, it would not provide practical improvements over these approximate oracles; as it appears difficult for a statistical model to learn the situations where replacing a wrong arc with another indirectly helps due to breaking prospective cycles. It is also worth mentioning that the nonmonotonic dynamic oracle with the best loss expression accomplishes an average improvement over the static version (1.26 UAS) greater than that obtained by the monotonic oracle (0.98 UAS), resulting in 13 statistically significant improvements achieved by the non-monotonic variant over the static oracle in comparison to the 12 obtained by the monotonic system. 
Finally, note that, despite this remarkable performance, the non-monotonic version (regardless of the loss expression implemented) has an inexplicable drop in accuracy in Basque in comparison to the other two oracles.

6.2 Comparison
In order to provide a broader contextualization of our approach, Table 4 presents a comparison of the average accuracy and parsing speed obtained by some well-known transition-based systems with dynamic oracles. Concretely, we include in this comparison both the monotonic (Goldberg and Nivre, 2012) and non-monotonic (Honnibal et al., 2013) versions of the arc-eager parser, as well as the original monotonic Covington system (Gómez-Rodríguez and Fernández-González, 2015). All three were run with our own implementation so that the comparison is homogeneous. We also report the published accuracy of the non-projective Attardi algorithm (Gómez-Rodríguez et al., 2014) on the nineteen datasets used in our experiments.

From Table 4 we can see that our approach achieves the best average UAS score, but is slightly slower at parsing time than the monotonic Covington algorithm. This can be explained by the fact that the non-monotonic parser has to take into consideration the whole set of transitions at each configuration (since all are allowed), while the monotonic parser only needs to evaluate a limited set of transitions in some configurations, speeding up the parsing process.

                          Average value
Algorithm            UAS     LAS     sent./s.
G&N 2012             84.32   77.68   833.33
G-R et al. 2014*     83.78   78.64   -
G-R&F-G 2015         84.46   77.92   335.63
H et al. 2013        84.28   77.68   847.33
This work            84.74   78.24   236.74

Table 4: Comparison of the average Unlabeled and Labeled Attachment Scores (including punctuation) achieved by some widely-used transition-based algorithms with dynamic oracles on nine CoNLL-X datasets and all CoNLL-XI datasets, as well as their average parsing speed (sentences per second across all datasets) measured on a 2.30GHz Intel Xeon processor. The first block corresponds to monotonic parsers, while the second gathers non-monotonic parsers. All algorithms are tested under our own implementation, except for the system developed by Gómez-Rodríguez et al. (2014) (marked with *), for which we report the published results.

6.3 Error Analysis
We also carry out an error analysis to provide some insight into how non-monotonicity improves accuracy with respect to the original Covington parser. In particular, we notice that non-monotonicity tends to be more beneficial on projective than on non-projective arcs. In addition, the non-monotonic algorithm performs notably well on long arcs (which are more prone to error propagation): average precision on arcs with length greater than 7 goes from 58.41% in the monotonic version to 63.19% in the non-monotonic parser, which may mean that non-monotonicity is alleviating the effect of error propagation. Finally, we study the effectiveness of non-monotonic arcs (i.e., those that break a previously-created arc), finding that, on average across all datasets tested, 36.86% of the arc transitions taken were non-monotonic, replacing an existing arc with a new one. Of these transitions, 60.31% created a gold arc, and only 5.99% were harmful (i.e., they replaced a previously-built gold arc with an incorrect arc), with the remaining cases creating non-gold arcs without introducing extra errors (replacing one non-gold arc with another). These results back up the usefulness of non-monotonicity in transition-based parsing.
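The long-arc figures above can be reproduced with a simple pass over predicted and gold head assignments. The following is a sketch under my own assumptions about the data layout (per-sentence head arrays, 0 for the artificial root), not the authors' analysis script.

```python
# Sketch of the long-arc precision measurement described above: precision of
# predicted dependency arcs whose length |head - dependent| exceeds a cutoff.
def long_arc_precision(pred_heads, gold_heads, length_cutoff=7):
    """pred_heads/gold_heads: lists of per-sentence head lists, where heads[i]
    is the head position of token i+1 and 0 denotes the artificial root."""
    correct = total = 0
    for pred, gold in zip(pred_heads, gold_heads):
        for dep, (p, g) in enumerate(zip(pred, gold), start=1):
            if abs(p - dep) > length_cutoff:   # predicted arc longer than the cutoff
                total += 1
                correct += int(p == g)
    return correct / total if total else 0.0
```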
7 Conclusion We presented a novel, fully non-monotonic variant of the well-known non-projective Covington parser, trained with a dynamic oracle. Due to the unpredictability of a non-monotonic scenario, the real loss of each configuration cannot be computed. To overcome this, we proposed three different loss expressions that closely bound the loss and enable us to implement a practical non-monotonic dynamic oracle. On average, our non-monotonic algorithm obtains better performance than the monotonic version, regardless of which of the variants of the loss calculation is used. In particular, one of the loss expressions developed proved very promising by providing the best average accuracy, in spite of being the farthest approximation from the actual loss. On the other hand, the proposed lower bound makes the non-monotonic oracle the fastest one among all dynamic oracles developed for the non-projective Covington algorithm. To our knowledge, this is the first implementation of non-monotonicity for a nonprojective parsing algorithm, and the first approximate dynamic oracle that uses close, efficientlycomputable approximations of the loss, showing this to be a feasible alternative when it is not practical to compute the actual loss. While we used a perceptron classifier for our experiments, our oracle could also be used in neuralnetwork implementations of greedy transitionbased parsing (Chen and Manning, 2014; Dyer et al., 2015), providing an interesting avenue for future work. We believe that gains from both techniques should be complementary, as they apply to orthogonal components of the parsing system (the scoring model vs. the transition system), although we might see a ”diminishing returns”effect. Acknowledgments This research has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 714150 - FASTPARSE). The second author has received funding from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) from MINECO. 296 References Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL). pages 149–164. http://www.aclweb.org/anthology/W062920. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 740–750. http://www.aclweb.org/anthology/D14-1082. Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference. ACM, New York, NY, USA, pages 95–102. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 334–343. http://www.aclweb.org/anthology/P15-1033. Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING 2012. Association for Computational Linguistics, Mumbai, India, pages 959–976. http://www.aclweb.org/anthology/C12-1059. Yoav Goldberg and Joakim Nivre. 2013. 
Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics 1:403–414. http://anthology.aclweb.org/Q/Q13/Q13-1033.pdf. Carlos G´omez-Rodr´ıguez and Daniel Fern´andezGonz´alez. 2015. An efficient dynamic oracle for unrestricted non-projective parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers. pages 256–261. http://aclweb.org/anthology/P/P15/P15-2042.pdf. Carlos G´omez-Rodr´ıguez and Joakim Nivre. 2013. Divisible transition systems and multiplanar dependency parsing. Computational Linguistics 39(4):799–845. http://aclweb.org/anthology/J/J13/J13-4002.pdf. Carlos G´omez-Rodr´ıguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dynamic oracle for non-projective dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 917–927. http://aclweb.org/anthology/D14-1099. Matthew Honnibal, Yoav Goldberg, and Mark Johnson. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013. pages 163– 172. http://aclweb.org/anthology/W/W13/W133518.pdf. Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1373–1378. http://aclweb.org/anthology/D15-1162. Donald B. Johnson. 1975. Finding all the elementary circuits of a directed graph. SIAM Journal on Computing 4(1):77–84. https://doi.org/10.1137/0204007. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). pages 122– 131. http://www.aclweb.org/anthology/D/D07/D071013.pdf. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT 03). ACL/SIGPARSE, pages 149–160. Joakim Nivre. 2008. Algorithms for Deterministic Incremental Dependency Parsing. Computational Linguistics 34(4):513–553. https://doi.org/10.1162/coli.07-056-R1-07-027. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. pages 915–932. http://www.aclweb.org/anthology/D/D07/D071096.pdf. Robert Endre Tarjan. 1972. Depth-first search and linear graph algorithms. SIAM J. Comput. 1(2):146–160. http://dblp.unitrier.de/db/journals/siamcomp/siamcomp1.html. Alexander Volokh. 2013. Performance-Oriented Dependency Parsing. Doctoral dissertation, Saarland University, Saarbr¨ucken, Germany. 297 Alexander Volokh and G¨unter Neumann. 2012. Dependency parsing with efficient feature extraction. In Birte Glimm and Antonio Kr¨uger, editors, KI. Springer, volume 7526 of Lecture Notes in Computer Science, pages 253–256. https://doi.org/10.1007/978-3-642-33347-7. 
Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING). pages 947–953. http://aclweb.org/anthology/C/C00/C00-2137.pdf.
2017
27
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 299–309 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1028 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 299–309 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1028 Aggregating and Predicting Sequence Labels from Crowd Annotations An T. Nguyen1 Byron C. Wallace2 Junyi Jessy Li3 Ani Nenkova3 Matthew Lease 1 1University of Texas at Austin, 2Northeastern University, 3University of Pennsylvania, [email protected], [email protected], {ljunyi|nenkova}@seas.upenn.edu, [email protected] Abstract Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online1. 1 Introduction Many important problems in Natural Language Processing (NLP) may be viewed as sequence labeling tasks, such as part-of-speech (PoS) tagging, named-entity recognition (NER), and Information Extraction (IE). As with other machine learning tasks, automatic sequence labeling typically requires annotated corpora on which to train predictive models. While such annotation was traditionally performed by domain experts, crowdsourcing has become a popular means to acquire large labeled datasets at lower cost, though annotations from laypeople may be lower quality than those from domain experts (Snow et al., 2008). It 1 Soure code and biomedical abstract data: www.github.com/thanhan/seqcrowd-acl17, www.byronwallace.com/EBM_abstracts_data is therefore essential to model crowdsourced label quality, both to estimate individual annotator reliability and to aggregate individual annotations to induce a single set of “reference standard” consensus labels. While many models have been proposed for aggregating crowd labels for binary or multiclass classification problems (Sheshadri and Lease, 2013), far less work has explored crowdbased annotation of sequences (Finin et al., 2010; Hovy et al., 2014; Rodrigues et al., 2014). In this paper, we investigate two complementary challenges in using sequential crowd labels: how to best aggregate them (Task 1); and how to accurately predict sequences in unannotated text given training data from the crowd (Task 2). For aggregation, one might want to induce a single set of high-quality consensus annotations for various purposes: (i) for direct use at run-time (when a given application requires human-level accuracy in identifying sequences); (ii) for sharing with others; or (iii) for training a predictive model. 
When human-level accuracy in tagging of sequences is not crucial, automatic labeling of unannotated text is typically preferable, as it is more efficient, scalable, and cost-effective. Given a training set of crowd labels, how can we best predict sequences in unannotated text? Should we: (i) consider Task 1 as a pre-processing step and train the model using consensus labels; or (ii) instead directly train the model on all of the individual annotations, as done by Yang et al. (2010)? We investigate both directions in this work. Our approach is to augment existing sequence labeling models such as HMMs (Rabiner and Juang, 1986) and LSTMs (Hochreiter and Schmidhuber, 1997; Lample et al., 2016) by introducing an explicit ”crowd component”. For HMMs, we model this crowd component by including additional parameters for worker label quality and crowd label variables. For the LSTM, we introduce a vector representation for each annotator. In 299 both cases, the crowd component models both the noise from labels and the label quality from each annotator. We find that principled combination of the “crowd component” with the “sequence component” yields strong improvement. For evaluation, we consider two practical applications in two text genres: NER in news and IE from medical abstracts. Recognizing namedentities such as people, organizations or locations can be viewed as a sequence labeling task in which each label specifies whether each word is Inside, Outside or Beginning (IOB) a namedentity. For this task, we consider the English portion of the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003), using crowd labels collected by Rodrigues et al. (2014). For the IE application, we use a set of biomedical abstracts that describe Randomized Controlled Trials (RCTs). The crowdsourced annotations comprise labeled text spans that describe the patient populations enrolled in the corresponding RCTs. For example, an abstract may contain the text: we recruited and enrolled diabetic patients. Identifying these sequences is useful for downstream systems that process biomedical literature, e.g., clinical search engines (Huang et al., 2006; Schardt et al., 2007; Wallace et al., 2016). Contributions. We present a systematic investigation and evaluation of alternative methods for handling and utilizing crowd labels for sequential annotation tasks. We consider both how to best aggregate sequential crowd labels (Task 1) and how to best predict sequences in unannotated text given a training set of crowd annotations (Task 2). As part of this work, we propose novel models for working with noisy sequence labels from the crowd. Reported experiments both benchmark existing state-of-the-art approaches (sequential and non-sequential) and show that our proposed models achieve best-in-class performance. As noted in the Abstract, we have also shared our sourcecode and data online for use by the community. 2 Related Work We briefly review two separate threads of relevant prior work: (1) sequence labeling models; and (2) aggregation of crowdsourcing annotations. Sequence labeling. Early work on learning for sequential tasks used HMMs (Bikel et al., 1997). HMMs are a class of generative probabilistic models comprising two components: an emission model from a hidden state to an observation and a transition model from a hidden state to the next hidden state. Later work focused on discriminative models such as Maximum Entropy Models (Chieu and Ng, 2002) and Conditional Random Fields (CRFs) (Lafferty et al., 2001). 
These were able to achieve strong predictive performance by exploiting arbitrary features, but they may not be the best choice for label aggregation. Also, compared to the simple HMM model, discriminative sequentially structured models require more complex optimization and are generally more difficult to extend. Here we argue for the generative HMMs for our first task of aggregating crowd labels. The generative nature of HMMs is a good fit for existing crowd modeling techniques and also enables very efficient parameter estimation. In addition to the supervised setting, previous work has studied unsupervised HMMs, e.g., for PoS induction (Goldwater and Griffiths, 2007; Johnson, 2007). These works are similar to our work in trying to infer the hidden states without labeled data. Our graphical model is different in incorporating signal from the crowd labels. For Task 2 (training predictive models), we consider CRFs and LSTMs. CRFs are undirected, conditional models that can exploit arbitrary features. They have achieved strong performance on many sequence labeling tasks (McCallum and Li, 2003), but they depend on hand-crafted features. Recent work has considered end-to-end neural architectures that learn features, e.g., Convolutional Neural Networks (CNNs) (Collobert et al., 2011; Kim, 2014; Zhang and Wallace, 2015) and LSTMs (Lample et al., 2016). Here we modify the LSTM model proposed by Lample et al. (2016) by augmenting the network with ‘crowd worker vectors’. Crowdsourcing. Acquiring labeled data is critical for training supervised models. Snow et al. (2008) proposed using Amazon Mechanical Turk to collect labels in NLP quickly and at low cost, albeit with some degradation in quality. Subsequent work has developed models for improving aggregate label quality (Raykar et al., 2010; Felt et al., 2015; Kajino et al., 2012; Bi et al., 2014; Liu et al., 2012; Hovy et al., 2013). Sheshadri and Lease (2013) survey and benchmark methods. However, these models are almost all in the binary or multiclass classification setting; only a few have considered sequence labeling. Dredze et al. (2009) proposed a method for learning a CRF 300 model from multiple labels (although the identities of the annotators or workers were not used). Rodrigues et al. (2014) extended this approach to account for worker identities, providing a joint ”crowd-CRF” model. They collected a dataset of crowdsourced labels for a portion of the CoNLL 2003 dataset. Using this, they showed that their model outperformed Dredze et al. (2009)’s model and other baselines. However, due to the technical difficulty of the joint approach with CRFs, they resorted to strong modeling assumptions. For example, their model assumes that for each word, only one worker provides the correct answer while all others label the word completely randomly. While this assumption captures some aspects of label quality, it is potentially problematic, such as for ‘easy words’ labeled correctly by all workers. More recently, ? proposed HMM models for aggregating crowdsourced discourse segmentation labels. However, they did not consider the general sequence labeling setting. Their method includes task-specific assumptions, e.g., that discourse segment lengths follow some empirical distribution estimated from data. In the absence of a gold standard, they evaluated by checking that workers accuracies are consistent and by comparing their two models to each other. We include their approach along with Rodrigues et al. (2014) as a baseline in our evaluation. 
3 Methods We present our Task 1 HMM approach in Section 3.1 and our Task 2 LSTM approach in Section 3.2. 3.1 HMMs with Crowd Workers Model: We first define a standard HMM with hidden states hi, observations vi, transition parameter vectors τ hi and emission parameter vectors Ωhi: hi+1|hi ∼Discrete(τ hi) (1) vi|hi ∼Discrete(Ωhi) (2) The discrete distributions here are governed by Multinomials. In the context of our task, vi is the word at position i and hi is the true, latent class of vi (e.g., entity or non-entity). For the crowd component, assume there are n classes, and let lij be the label for word i provided by worker j. Further, let C(j) be the confusion matrix for worker j, i.e., C(j) k is a vector of size n in which element k′ is the probability of worker j lij Discrete C(j) hi hi−1 hi+1 m workers Discrete vi Ω Figure 1: The factor graph for our HMM-Crowd model. Dotted rectangles are gates, where the value of hi is used to select the parameters for the Multinomial governing the Discrete distribution. providing the label k′ for a word of true class k: lij|hi ∼Discrete(C(j) hi ) (3) Figure 1 shows the factor graph of this model, which we call HMM-Crowd. Note that we assume that individual crowdworker labels are conditionally independent given the (hidden) true label. A common problem with crowdsourcing models is data sparsity. For workers who provide only a few labels, it is hard to derive a good estimate of their confusion matrices. This is exacerbated when the label distribution is imbalanced, e.g., most words are not part of a named entity, concentrating the counts in a few confusion matrix entries. Solutions for this problem include hierarchical models of ‘worker communities’ (Venanzi et al., 2014) or correlations between confusion matrix entries (Nguyen et al., 2016). Although effective, these methods are also quite computationally expensive. For our models, to keep parameter estimation efficient, we use a simpler solution of ‘collapsing’ the confusion matrix into a ‘confusion vector’. For worker j, instead of having the n × n matrix C(j), we use the n × 1 vector C′(j) where C′(j) k is the probability of worker j labeling a word with true class k correctly. We also smooth the estimate of C′ with prior counts as in (Liu and Wang, 2012; Kim and Ghahramani, 2012). Learning: We use the Expectation Maximization (EM) algorithm (Dempster et al., 1977) to learn the parameters (τ, Ω, C′), given the observations (all the words V and all the worker labels L). In the E-step, given the current estimates of the parameters, we take a forward and a backward 301 pass in the HMM to infer the hidden states, i.e. to calculate p(hi|V, L) and p(hi, hi+1|V, L) for all appropriate i. Let α(hi) = p(hi, v1:i, l1:i) where v1:i are the words from position 1 to i and l1:i are the crowd labels for these words from all workers. Similarly, let β(hi) = p(vi+1:n, li+1:n|hi). We have the recursions: α(hi) = X hi−1 p(vi|hi)p(hi|hi−1) Y j p(lij|hi)α(hi−1) (4) β(hi) = X hi+1 p(hi+1|hi)p(vi+1|hi+1) Y j p(li+1,j|hi+1)β(hi+1) (5) These are the standard α and β recursions for HMMs augmented with the crowd model: the product Q j over the workers j who have provided labels for word i (or i + 1). The posteriors can then be easily evaluated: p(hi|V, L) ∝ α(hi)β(hi) and p(hi, hi+1|V, L) ∝ α(hi)p(hi+1|hi)p(vi+1|hi+1)β(hi+1) In the standard M-step, the parameters are estimated using maximum likelihood. 
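To make the recursions in Eqs. (4)-(5) concrete, here is a minimal numpy sketch of the scaled forward pass (the backward pass is symmetric). The initial state distribution, the uniform spreading of each worker's error mass over the other n-1 labels, and the per-step rescaling are assumptions made for illustration, not details taken from the paper or its code.

```python
# Sketch of the crowd-augmented forward recursion of Eq. (4).
# Assumptions (mine): C[j, k] is worker j's probability of labeling class k
# correctly (the collapsed "confusion vector"), with the remaining mass spread
# uniformly over the other n-1 classes; pi is an initial state distribution.
import numpy as np

def crowd_factor(labels_i, C, n):
    """prod_j p(l_ij | h_i = k), as a vector over hidden classes k, for word i.
    labels_i: list of (worker_id, label) pairs provided for this word."""
    out = np.ones(n)
    for j, l in labels_i:
        p = (1.0 - C[j]) / (n - 1)   # value for classes k != l
        p[l] = C[j, l]               # value for the class k == l
        out *= p
    return out

def forward(words, crowd_labels, tau, Omega, C, pi):
    """words: word indices; crowd_labels[i]: (worker, label) pairs for word i;
    tau[h, h']: transition probs; Omega[h, v]: emission probs."""
    n, T = tau.shape[0], len(words)
    alpha = np.zeros((T, n))
    alpha[0] = pi * Omega[:, words[0]] * crowd_factor(crowd_labels[0], C, n)
    alpha[0] /= alpha[0].sum()
    for i in range(1, T):
        alpha[i] = (alpha[i - 1] @ tau) * Omega[:, words[i]] \
                   * crowd_factor(crowd_labels[i], C, n)
        alpha[i] /= alpha[i].sum()   # rescale each step to avoid underflow
    return alpha
```

Together with the symmetric backward pass, these quantities give the posteriors p(h_i | V, L) and p(h_i, h_{i+1} | V, L), from which the expected counts for the M-step are accumulated.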
However, we found a Variational Bayesian (VB) update procedure for the HMM parameters similar to (Johnson, 2007; Beal, 2003) provides some improvement and stability. We first define the Dirichlet priors over the transition and emission parameters: p(τ hi) = Dir(at) (6) p(Ωhi) = Dir(ae) (7) With these priors, the variational M-step updates the parameters as follows2: τ h′|h = exp{Ψ(Eh′|h + at)} exp{Ψ(Eh + nat)} (8) Ωv|h = exp{Ψ(Ev|h + ae)} exp{Ψ(Eh + mae)} (9) where Ψ is the Digamma function, n is the number of states and m is the number of observations. E denotes the expected counts, calculated from the posteriors inferred in the E-step. Eh′|h is the expected number of times the HMM transitioned from state h to state h′, where the expectation is with respect to the posterior distribution p(hi, hi+1|V, L) that we infer in the E step: Eh′|h = X i p(hi = h, hi+1 = h′|V, L) (10) 2See Beal (2003) for the derivation and Johnson (2007) for further discussion for the Variational Bayesian approach. Similarly, Eh is the expected number of times the HMM is at state h: Eh = P i p(hi = h|V, L) and Ev|h is the expected number of times the HMM emits the observation v from the state h: Ev|h = P i,vi=v p(hi = h|V, L). For the crowd parameters C′(j), we use the (smoothed) maximum likelihood estimate: C′(j) k = E(j) k|k + ac E(j) k + nac (11) where ac is the smoothing parameter and E(j) k|k is the expected number of times that worker j correctly labeled a word of true class k as k while E(j) k is the expected total number of words belonging to class k worker j has labeled. Again, the expectation in E is taken under the posterior distributions that we infer in the E step. 3.2 Long Short Term Memory with Crowds For Task 2, we extend the LSTM architecture (Hochreiter and Schmidhuber, 1997) for NER (Lample et al., 2016) to account for noisy crowdsourced labels (this can be easily adapted to other sequence labeling tasks). In this model, the sentence input is first fed into an LSTM block (which includes character- and word-level bi-directional LSTM units). The LSTM block’s output then becomes input to a (fully connected) hidden layer, which produces a vector of tags scores for each word. This tag score vector is the word-level prediction, representing the likelihood of the word being from each tag. All the tags scores are then fed into a ‘CRF layer’ that ‘connects’ the word-level predictions in the sentence and produces the final output: the most likely sequence of tags. We introduce a crowd representation in which a worker vector represents the noise associated with her labels. In other words, the parameters in the original architecture learns the correct sequence labeling model while the crowd vectors add noise to its predictions to ‘explain’ the lower quality of the labels. We assume a perfect worker has a zero vector as her representation while an unreliable worker is represented by a large magnitude vector. At test time, we ignore the crowd component and make predictions by feeding the unlabeled sentence into the original LSTM architecture. At train time, an example consists of the labeled sentence and the ID of the worker who provided the labels. Worker IDs are mapped to vector representations and incorporated into the LSTM architecture. 302 LSTM Hidden Layer Tags Scores CRF + Crowd Vector Worker ID Sentence ... Figure 2: The LSTM-Crowd model. The Crowd Vector is added (element-wise) to the Tags Scores. LSTM Hidden Layer Tags Scores CRF Crowd Vector Worker ID Sentence ... Figure 3: The LSTM-Crowd-cat model. 
The crowd vectors provide additional input for the Hidden Layer (they are effectively concatenated to the output of the LSTM block). We propose two strategies for incorporating the crowd vector into the LSTM: (1) adding the crowd vector to the tags scores and (2) concatenating the crowd vector to the output of the LSTM block. LSTM-Crowd. The first strategy is illustrated in Figure 2. We set the dimension of the crowd vectors to be equal to the number of tags and the addition is element-wise. In this strategy, the crowd vectors have a nice interpretation: the tagconditional noise for the worker. This is useful for worker evaluation and intelligent task routing (i.e. assigning the right work to the right worker). LSTM-Crowd-cat. The second strategy is illustrated in Figure 3. We set the crowd vectors to be additional inputs for the Hidden Layer (along with the LSTM block output). In this way, we are free to set the dimension of the crowd vectors and we have a more flexible model of worker noise. This comes with a cost of reduced interpretability and additional parameters in the hidden layer. For both strategies, the crowd vectors are randomly initialized and learned in the same LSTM architecture using Back Propagation (Rumelhart et al., 1985) and Stochastic Gradient Descent (SGD) (Bottou, 2010). Dataset Application Size Gold Crowd CoNLL’03 NER 1393 All 400 Medical IE 5000 200 All Table 1: Datasets used for each application. We list the total number of articles/abstracts and the number which have Gold/Crowd labels. 4 Evaluation Setup 4.1 Datasets & Tuning NER. We use the English portion of the CoNLL2003 dataset (Tjong Kim Sang and De Meulder, 2003), which includes over 21,000 annotated sentences from 1,393 news articles split into 3 sets: train, validation and test. We also use crowd labels collected by Rodrigues et al. (2014) for 400 articles in the train set3. For Task 1 (aggregating crowd labels), to avoid overfitting, we split these 400 articles into 50% validation and 50% test4. For Task 2 (predicting sequences on unannotated text), we follow Rodrigues et al. (2014) in using the CoNLL validation and test sets. Biomedical IE. We use 5,000 medical paper abstracts describing randomized control trials (RCTs) involving people. Each abstract is annotated by roughly 5 Amazon Mechanical Turk workers. Annotators were asked to mark all text spans in a given abstract which identify the population enrolled in the clinical trial. The annotations are therefore binary: inside or outside a span. In addition to annotations collected from laypeople via Mechanical Turk, we also use gold annotations by medical students for a small set of 200 abstracts, which we split into 50% validation and 50% test. For Task 1, we run methods being compared on all 5,000 abstracts, but we evaluate them only using the validation/test set. For Task 2, the validation and test sets are held out. Table 1 presents key statistics of datasets used. Tuning: In all experiments, validation set results are used to tune the models hyper-parameters. For HMM-Crowd, we have a smoothing parameter and two Dirichlet priors. For our two LSTMs, we have a L2 regularization parameter. For LSTMCrowd-cat, we also have the crowd vector dimen3http://www.fprodrigues.com/software/ crf-ma-sequence-labeling-with-multiple-annotators/ 4Rodrigues et al. (2014)’s results on the ‘training set’ are not directly comparable to ours since they do not partition the crowd labels into validation and test sets. 303 sion. 
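As a concrete illustration of the two strategies of Section 3.2, the sketch below shows one way the crowd vectors could be wired into the tag-scoring step. The tensor shapes, module names, and the zero-vector treatment of test time are assumptions; this is not the authors' implementation, and the sequence encoder and CRF layer are treated as given black boxes.

```python
# Sketch of the two crowd-vector strategies (illustrative, not the authors' code).
import torch
import torch.nn as nn

class CrowdTagScorer(nn.Module):
    """concat=False -> LSTM-Crowd: crowd vector (size n_tags) added to tag scores.
    concat=True  -> LSTM-Crowd-cat: crowd vector concatenated to the LSTM output
    before the hidden layer. Output scores would then feed the CRF layer."""
    def __init__(self, lstm_dim, n_tags, n_workers, crowd_dim=8, concat=False):
        super().__init__()
        self.concat = concat
        self.crowd = nn.Embedding(n_workers, crowd_dim if concat else n_tags)
        self.hidden = nn.Linear(lstm_dim + (crowd_dim if concat else 0), n_tags)

    def forward(self, lstm_out, worker_id=None):
        # lstm_out: (seq_len, lstm_dim); worker_id: scalar LongTensor at train
        # time, None at test time (the crowd component is then a zero vector).
        if worker_id is None:
            v = lstm_out.new_zeros(self.crowd.embedding_dim)
        else:
            v = self.crowd(worker_id)
        v = v.view(1, -1).expand(lstm_out.size(0), -1)
        if self.concat:                               # LSTM-Crowd-cat
            return self.hidden(torch.cat([lstm_out, v], dim=-1))
        return self.hidden(lstm_out) + v              # LSTM-Crowd

# Usage (hypothetical): scores = scorer(lstm_out, torch.tensor(5))  # training
#                       scores = scorer(lstm_out)                   # test time
```

The crowd embeddings are learned by backpropagation along with the rest of the network, matching the description above.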
For each hyper-parameter, we consider a few (less then 5) different parameter settings for light tuning. We report results achieved on the test set. 4.2 Baselines Task 1. For aggregating crowd labels, we consider the following baselines: • Majority Voting (MV) at the token level. Rodrigues et al. (2014) show that this generally performs better than MV at the entity level. • Dawid and Skene (1979) weighted voting at the token level. We tested both a popular public implementation5 of Dawid-Skene and our own and found that ours performed better (likely due to smoothing), so we report it. • MACE (Hovy et al., 2013), using the authors’ public implementation6. • Dawid-Skene then HMM. We propose a simple heuristic to aggregate sequential crowd labels: (1) use Dawid and Skene (1979) to induce consensus labels from individual crowd labels; (2) train a HMM using the input text and consensus labels; and then (3) use the trained HMM to predict and output labels for the input text. We also tried using a CRF or LSTM as the sequence labeler but found the HMM performed best. This is not surprising: CRFs and LSTM are good at predicting unseen sequences, whereas the predictions here are on the seen training sequences. • Rodrigues et al. (2014)’s CRF with Multiple Annotators (CRF-MA). We use the source code provided by the authors. • ?’s Interval-dependent (ID) HMM using the authors’ source code7. Since they assume binary labels, we can only apply this to the biomedical IE task. For non-sequential aggregation baselines, we evaluate majority voting (MV) and Dawid and Skene (1979) as perhaps the most widely known and used in practice. A recent benchmark evaluation of aggregation methods for (non-sequential) crowd labels found that classic Dawid-Skene was the most consistently strong performing method 5https://github.com/ipeirotis/Get-Another-Label 6http://www.isi.edu/publications/licensed-sw/mace/ 7https://academiccommons.columbia.edu/catalog/ac:199939 among those considered, despite its age, while majority voting was often outperformed by other methods (Sheshadri and Lease, 2013). Dawid and Skene (1979) models a confusion matrix for each annotator, using EM estimation of these matrices as parameters and the true token labels as hidden variables. This is roughly equivalent to our proposed HMM-Crowd model (Section 3), but without the HMM component. Task 2. To predict sequences on unannotated text when trained on crowd labels, we consider two broad approaches: (1) directly train the model on all individual crowd annotations; and (2) induce consensus labels via Task 1 and train on them. For approach (1), we report as baselines: • Rodrigues et al. (2014)’s CRF-MA • Lample et al. (2016)’s LSTM trained on all crowd labels (ignoring worker IDs) For approach (2), we report as baselines: • Majority Voting (MV) then Conditional Random Field (CRF). We train the CRF using the CRF Suite package (Okazaki, 2007) with the same features as in Rodrigues et al. (2014), who also report this baseline. • Lample et al. (2016)’s LSTM trained on Dawid-Skene (DS) consensus labels. 4.3 Metrics NER. We use the CoNLL 2003 metrics of entitylevel precision, recall and F1. The predicted entity must match the gold entity exactly (i.e. no partial credit is given for partial matches). Biomedical IE. The above metrics are overly strict for the biomedical IR task, in which annotated sequences are typically far longer than named-entities. We therefore ‘relax’ the metric to credit partial matches as follows. 
For each predicted positive contiguous text span, we calculate: Precision = # true positive words # words in this predicted span For example, for a predicted span of 10 words, if 6 words are truly positive, the Precision is 60%. We evaluate this ‘local’ precision for each predicted span and then take the average as the ‘global’ precision. Similarly, for each gold span, we calculate: Recall = # words in a predicted span # words in this gold span 304 Method Precision Recall F1 Majority Vote 78.35 56.57 65.71 MACE 65.10 69.81 67.37 Dawid-Skene (DS) 78.05 65.78 71.39 CRF-MA 80.29 51.20 62.53 DS then HMM 76.81 71.41 74.01 HMM-Crowd 77.40 72.29 74.76 Table 2: NER results for Task 1 (crowd label aggregation). Rows 1-3 show non-sequential methods while Rows 4-6 show sequential methods. The recall scores for all the gold spans are again averaged to get a global recall score. For the biomedical IE task, because we have gold labels for only a small set of 200 abstracts, we create 100 bootstrap re-samples of the (predicted and gold) spans and perform the evaluation for each re-sample. We then report the mean and standard deviation over these 100 re-samples. 5 Evaluation Results 5.1 Named-Entity Recognition (NER) Table 2 presents Task 1 results for aggregating crowd labels. For the non-sequential aggregation baselines, we see that Dawid and Skene (1979) outperforms both majority voting and MACE (Hovy et al., 2013). For sequential methods, our heuristic ‘Dawid-Skene then HMM’ method performs surprisingly well, nearly as well as HMM-Crowd. However, we will see that this heuristic does not work as well for biomedical IR. Rodrigues et al. (2014)’s CRF-MA achieves the highest Precision of all methods, but surprisingly the lowest F1. We use their public implementation but observe different results from what they report (we observed similar results when using all the crowd data without validation/test split as they do). We suspect their released source code may be optimized for Task 2, though we could not reach the authors to verify this. Table 3 reports NER results for Task 2: predicting sequences on unannotated text when trained on crowd labels. Results for Rodrigues et al. (2014)’s CRF-MA are reproduced using their public implementation and match their reported results. While CRF-MA outperforms ‘Majority Vote then CRF’ as the authors reported, and achieves the highest Recall of all methods, its F1 results are generally not competitive with other methods. Methods based on Lample et al. (2016)’s LSTM generally outperform the CRF methods. Adding a crowd component to the LSTM yields marked improvement of 2.5-3 points F1 vs. the LSTM trained on individual crowd annotations or consensus MV annotations. LSTM-Crowd (trained on individual labels) and ‘HMM-Crowd then LSTM’ (LSTM trained on HMM consensus labels) offer different paths to achieving comparable, best results. 5.2 Biomedical Information Extraction (IE) Tables 4 and 5 present Biomedical IE results for Tasks 1 and 2, respectively. We were unable to run Rodrigues et al. (2014)’s CRF-MA public implementation on the Biomedical IE dataset (due to an ‘Out of Memory Error’ with 8gb max heapsize). For Task 1, Majority Vote achieves nearly 92% Precision but suffers from very low Recall. As with NER, HMM-Crowd achieves the highest Recall and F1, showing 2 points F1 improvement here over non-sequential Dawid and Skene (1979). In contrast with the NER results, our heuristic ‘Dawid-Skene then HMM’ performs much worse for Biomedical IE. 
In general, we expect heuristics to be less robust than principled methods. For Task 2, as with NER, we again see that LSTM-Crowd (trained on individual labels) and ‘HMM-Crowd then LSTM’ (LSTM trained on HMM consensus labels) offer different paths to achieving fairly comparable results. While LSTM-Crowd-cat again achieves slightly lower F1, simply training Lample et al. (2016)’s LSTM directly on all crowd labels performs much better than seen earlier with NER, likely due to the relatively larger size of this dataset (see Table 1). To further investigate, we study the performances of these LSTM models as a function of training data available. In Figure 4, we see that as the amount of training data decreases, our crowd-augmented LSTM models produce greater relative improvement compared to the original LSTM architecture. Table 6 presents an example from Task 1 of a sentence with its gold span, annotations and the outputs from Dawid-Skene and HMM-Crowd. Dawid-Skene aggregates labels based only on the crowd labels while our HMM-Crowd combines that with a sequence model. HMM-Crowd is able to return the longer part of the correct span. 305 Method Precision Recall F1 CRF-MA (Rodrigues et al., 2014) 49.40 85.60 62.60 LSTM (Lample et al., 2016) 83.19 57.12 67.73 LSTM-Crowd 82.38 62.10 70.82 LSTM-Crowd-cat 79.61 62.87 70.26 Majority Vote then CRF 45.50 80.90 58.20 Dawid-Skene then LSTM 72.30 61.17 66.27 HMM-Crowd then CRF 77.40 61.40 68.50 HMM-Crowd then LSTM 76.19 66.24 70.87 LSTM on Gold Labels (upper-bound) 85.27 83.19 84.22 Table 3: NER results on Task 2: predicting sequences on unannotated text when trained on crowd labels. Rows 1-4 train the predictive model using individual crowd labels, while Rows 5-8 first aggregate crowd labels then train the model on the induced consensus labels. The last row indicates an upper-bound from training on gold labels. LSTM-Crowd and LSTM-Crowd-cat are described in Section 3. Method Precision Recall F1 std Majority Vote 91.89 48.03 63.03 2.6 MACE 45.01 88.49 59.63 1.7 Dawid-Skene 77.85 66.77 71.84 1.7 Dawid-Skene then HMM 72.49 58.77 64.86 2.0 ID HMM (?) 78.99 68.10 73.11 1.9 HMM-Crowd 72.81 75.14 73.93 1.8 Table 4: Biomedical IE results for Task 1: aggregating sequential crowd labels to induce consensus labels. Rows 1-3 indicate non-sequential baselines. Results are averaged over 100 bootstrap re-samples. We report the standard deviation of F1, std, due to this dataset having fewer gold labels for evaluation. Method Precision Recall F1 std LSTM (Lample et al., 2016) 77.43 61.13 68.27 1.9 LSTM-Crowd 73.83 63.93 68.47 1.6 LSTM-Crowd-cat 68.08 68.41 68.20 1.8 Majority Vote then CRF 93.71 33.16 48.92 2.8 Dawid-Skene then LSTM 70.21 65.26 67.59 1.7 HMM-Crowd then CRF 79.54 54.76 64.81 2.0 HMM-Crowd then LSTM 73.65 64.64 68.81 1.9 Table 5: Biomedical IE results for Task 2. Rows 1-3 correspond to training on all labels, while Rows 4-7 first aggregate crowd labels then train the sequence labeling model on consensus annotations. 306 Gold ... was as safe and effective as ... for the empiric treatment of acute invasive diarrhea in ambulatory pediatric patients requiring an emergency room visit Annotations ... was as safe and effective as ... for the empiric treatment of acute invasive diarrhea (2 out of 5) in ambulatory pediatric patients requiring an emergency room visit Dawid-Skene ... was as safe and effective as ... for the empiric treatment of acute invasive diarrhea in ambulatory pediatric patients requiring an emergency room visit HMM-Crowd ... 
was as safe and effective as ... for the empiric treatment of acute invasive diarrhea in ambulatory pediatric patients requiring an emergency room visit Table 6: An example from the medical abstract dataset for task 1: inferring true labels. Out of 5 annotations, only 2 have identified a positive span (the other 3 are empty). Dawid-Skene is able to assign higher weights to the minority of 2 annotations to return a part of the correct span. HMM-Crowd returns a longer part of the span, which we believe is due to useful signal from its sequence model. Figure 4: F1 scores in Task 2 for biomedical IE with varying percentages of training data. 6 Conclusions and Future Work Given a dataset of crowdsourced sequence labels, we presented novel methods to: (1) aggregate sequential crowd labels to infer a best single set of consensus annotations; and (2) use crowd annotations as training data for a model that can predict sequences in unannotated text. We evaluated our approaches on two datasets representing different domains and tasks: general NER and biomedical IE. Results showed that our methods show improvement over strong baselines. We expect our methods to be applicable to and similarly benefit other sequence labeling tasks, such as POS tagging and chunking (Hovy et al., 2014). Our methods also provide an estimate of each worker’s label quality, which can be transfered between tasks and is useful for error analysis and intelligent task routing (Bragg et al., 2014). We also plan to investigate extension of the crowd component in our HMM method with hierarchical models, as well as a fully-Bayesian approach. Acknowledgements We thank the reviewers for their valuable comments. This work is supported in part by by National Science Foundation grant No. 1253413 and the National Cancer Institute (NCI) of the National Institutes of Health (NIH), award number UH2CA203711. Any opinions, findings, and conclusions or recommendations expressed by the authors are entirely their own and do not represent those of the sponsoring agencies. References Matthew James Beal. 2003. Variational algorithms for approximate Bayesian inference. University of London United Kingdom. Wei Bi, Liwei Wang, James T. Kwok, and Zhuowen Tu. 2014. Learning to predict from crowdsourced data. In Uncertainty in Artificial Intelligence. Daniel M Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high-performance learning name-finder. In Proceedings of the fifth conference on Applied natural language processing. Association for Computational Linguistics, pages 194–201. https://doi.org/10.3115/974557.974586. L´eon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, Springer, pages 177–186. Jonathan Bragg, Andrey Kolobov, Mausam Mausam, and Daniel S Weld. 2014. Parallel task routing for crowdsourcing. In Second AAAI Conference on Human Computation and Crowdsourcing. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of the 19th international conference on Computational linguisticsVolume 1. Association for Computational Linguis307 tics, pages 1–7. http://aclweb.org/anthology/C021025. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Alexander Philip Dawid and Allan M Skene. 1979. 
Maximum likelihood estimation of observer errorrates using the em algorithm. Applied statistics pages 20–28. Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society. Series B (methodological) pages 1–38. Mark Dredze, Partha Pratim Talukdar, and Koby Crammer. 2009. Sequence learning from data with multiple labels. In ECML-PKDD 2009 workshop on Learning from Multi- Label Data. Paul Felt, Eric Ringger, Kevin Seppi, and Robbie Haertel. 2015. Early gains matter: A case for preferring generative over discriminative crowdsourcing models. In Conference of the North American Chapter of the Association for Computational Linguistics. https://doi.org/10.3115/v1/N15-1089. Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating named entities in twitter data with crowdsourcing. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Association for Computational Linguistics, pages 80–88. Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Annual meeting-association for computational linguistics. Citeseer, volume 45, page 744. http://aclweb.org/anthology/P07-1094. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with mace. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1120–1130. http://aclweb.org/anthology/N13-1132. Dirk Hovy, Barbara Plank, and Anders Søgaard. 2014. Experiments with crowdsourced re-annotation of a pos tagging data set. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 377–382. https://doi.org/10.3115/v1/P14-2062. Xiaoli Huang, Jimmy Lin, and Dina Demner-Fushman. 2006. PICO as a Knowledge Representation for Clinical Questions. In AMIA 2006 Symposium Proceedings. pages 359–363. Ziheng Huang, Jialu Zhong, and Rebecca J. Passonneau. 2015. Estimation of discourse segmentation labels from crowd data. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2190– 2200. http://aclweb.org/anthology/D15-1261. Mark Johnson. 2007. Why doesn’t em find good hmm pos-taggers? In EMNLP-CoNLL. pages 296–305. http://aclweb.org/anthology/D07-1031. Hiroshi Kajino, Yuta Tsuboi, and Hisashi Kashima. 2012. A convex formulation for learning from crowds. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Hyun-Chul Kim and Zoubin Ghahramani. 2012. Bayesian classifier combination. In International conference on artificial intelligence and statistics. pages 619–627. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1746– 1751. http://www.aclweb.org/anthology/D14-1181. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
In Proceedings of the eighteenth international conference on machine learning, ICML. volume 1, pages 282–289. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 260–270. https://doi.org/10.18653/v1/N16-1030. Chao Liu and Yi-min Wang. 2012. Truelabel+ confusions: A spectrum of probabilistic models in analyzing multiple ratings. In Proceedings of the 29th International Conference on Machine Learning (ICML-12). pages 225–232. Qiang Liu, Jian Peng, and Alex T Ihler. 2012. Variational inference for crowdsourcing. In Advances in Neural Information Processing Systems. pages 692– 700. Andrew McCallum and Wei Li. 2003. Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 308 chapter Early results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons. http://aclweb.org/anthology/W03-0430. An T Nguyen, Byron C Wallace, and Matthew Lease. 2016. A correlated worker model for grouped, imbalanced and multitask data. In Uncertainty in Artificial Intelligence. Naoaki Okazaki. 2007. Crfsuite: a fast implementation of conditional random fields (crfs). http://www.chokkan.org/software/crfsuite/. Lawrence Rabiner and B Juang. 1986. An introduction to hidden markov models. ieee assp magazine 3(1):4–16. Vikas C Raykar, Shipeng Yu, Linda H Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. 2010. Learning from crowds. Journal of Machine Learning Research 11(Apr):1297–1322. Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine learning 95(2):165–181. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. Technical report, DTIC Document. Connie Schardt, Martha B Adams, Thomas Owens, Sheri Keitz, and Paul Fontelo. 2007. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC medical informatics and decision making 7(1):16. Aashish Sheshadri and Matthew Lease. 2013. Square: A benchmark for research on computing crowd consensus. In First AAAI Conference on Human Computation and Crowdsourcing. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 254–263. http://aclweb.org/anthology/D081027. Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition, pages 142–147. http://aclweb.org/anthology/W030419. Matteo Venanzi, John Guiver, Gabriella Kazai, Pushmeet Kohli, and Milad Shokouhi. 2014. Community-based bayesian aggregation models for crowdsourcing. In Proceedings of the 23rd international conference on World wide web. ACM, pages 155–164. Byron C Wallace, Jo¨el Kuiper, Aakash Sharma, Mingxi Brian Zhu, and Iain J Marshall. 2016. Extracting pico sentences from clinical trial reports using supervised distant supervision. Journal of Machine Learning Research 17(132):1–25. Hui Yang, Anton Mityagin, Krysta M Svore, and Sergey Markov. 2010. 
Collecting high quality overlapping labels at low cost. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 459–466. Ye Zhang and Byron Wallace. 2015. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820 . 309
2017
28
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 310–320 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1029 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 310–320 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1029 Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction Chunting Zhou, Graham Neubig Language Technologies Institute Carnegie Mellon University ctzhou,[email protected] Abstract Labeled sequence transduction is a task of transforming one sequence into another sequence that satisfies desiderata specified by a set of labels. In this paper we propose multi-space variational encoderdecoders, a new model for labeled sequence transduction with semi-supervised learning. The generative model can use neural networks to handle both discrete and continuous latent variables to exploit various features of data. Experiments show that our model provides not only a powerful supervised framework but also can effectively take advantage of the unlabeled data. On the SIGMORPHON morphological inflection benchmark, our model outperforms single-model state-ofart results by a large margin for the majority of languages.1 1 Introduction This paper proposes a model for labeled sequence transduction tasks, tasks where we are given an input sequence and a set of labels, from which we are expected to generate an output sequence that reflects the content of the input sequence and desiderata specified by the labels. Several examples of these tasks exist in prior work: using labels to moderate politeness in machine translation results (Sennrich et al., 2016), modifying the output language of a machine translation system (Johnson et al., 2016), or controlling the length of a summary in summarization (Kikuchi et al., 2016). In particular, however, we are motivated by the task of morphological reinflection (Cotterell et al., 1An implementation of our model are available at https://github.com/violet-zct/ MSVED-morph-reinflection. playing played POS=Verb, Tense=Past Model plays Supervised Learning Semi-Supervised Learning Figure 1: Standard supervised labeled sequence transduction, and our proposed semi-supervised method. 2016), which we will use as an example in our description and test bed for our models. In morphologically rich languages, different affixes (i.e. prefixes, infixes, suffixes) can be combined with the lemma to reflect various syntactic and semantic features of a word. The ability to accurately analyze and generate morphological forms is crucial to creating applications such as machine translation (Chahuneau et al., 2013; Toutanova et al., 2008) or information retrieval (Darwish and Oard, 2007) in these languages. As shown in 1, re-inflection of an inflected form given the target linguistic labels is a challenging subtask of handling morphology as a whole, in which we take as input an inflected form (in the example, “playing”) and labels representing the desired form (“pos=Verb, tense=Past”) and must generate the desired form (“played”). Approaches to this task include those utilizing hand-crafted linguistic rules and heuristics (Taji et al., 2016), as well as learning-based approaches using alignment and extracted transduction rules (Durrett and DeNero, 2013; Alegria and Etxeberria, 2016; Nicolai et al., 2016). 
There have also been methods proposed using neural sequenceto-sequence models (Faruqui et al., 2016; Kann et al., 2016; Ostling, 2016), and currently ensembles of attentional encoder-decoder models (Kann and Sch¨utze, 2016a,b) have achieved state-of-art results on this task. One feature of these neural models however, is that they are trained in a 310 largely supervised fashion (top of Fig. 1), using data explicitly labeled with the input sequence and labels, along with the output representation. Needless to say, the ability to obtain this annotated data for many languages is limited. However, we can expect that for most languages we can obtain large amounts of unlabeled surface forms that may allow for semi-supervised learning over this unlabeled data (entirety of Fig. 1).2 In this work, we propose a new framework for labeled sequence transduction problems: multi-space variational encoder-decoders (MSVED, §3.3). MSVEDs employ continuous or discrete latent variables belonging to multiple separate probability distributions3 to explain the observed data. In the example of morphological reinflection, we introduce a vector of continuous random variables that represent the lemma of the source and target words, and also one discrete random variable for each of the labels, which are on the source or the target side. This model has the advantage of both providing a powerful modeling framework for supervised learning, and allowing for learning in an unsupervised setting. For labeled data, we maximize the variational lower bound on the marginal log likelihood of the data and annotated labels. For unlabeled data, we train an auto-encoder to reconstruct a word conditioned on its lemma and morphological labels. While these labels are unavailable, a set of discrete latent variables are associated with each unlabeled word. Afterwards we can perform posterior inference on these latent variables and maximize the variational lower bound on the marginal log likelihood of data. Experiments on the SIGMORPHON morphological reinflection task (Cotterell et al., 2016) find that our model beats the state-of-the-art for a single model in the majority of languages, and is particularly effective in languages with more complicated inflectional phenomena. Further, we find that semi-supervised learning allows for significant further gains. Finally, qualitative evaluation of lemma representations finds that our model is able to learn lemma embeddings that match with human intuition. 2Faruqui et al. (2016) have attempted a limited form of semi-supervised learning by re-ranking with a standard ngram language model, but this is not integrated with the learning process for the neural model and gains are limited. 3Analogous to multi-space hidden Markov models (Tokuda et al., 2002) 2 Labeled Sequence Transduction In this section, we first present some notations regarding labeled sequence transduction problems in general, then describe a particular instantiation for morphological reinflection. Notation: Labeled sequence transduction problems involve transforming a source sequence x(s) into a target sequence x(t), with some labels describing the particular variety of transformation to be performed. We use discrete variables y(t) 1 , y(t) 2 , · · · , y(t) K to denote the labels associated with each target sequence, where K is the total number of labels. Let y(t) = [y(t) 1 , y(t) 2 , · · · , y(t) K ] denote a vector of these discrete variables. 
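To make this notation concrete, the following is a minimal sketch (not taken from the paper's released code) of how a single reinflection instance can be represented. The dictionary keys and label values are chosen only for illustration, mirroring the "pos=Verb, tense=Past" example from the introduction.

# One labeled transduction instance: a source character sequence x(s),
# a target character sequence x(t), and K categorical target labels y(t).
instance = {
    "x_src": list("playing"),               # x(s): characters of the inflected source word
    "x_tgt": list("played"),                # x(t): characters of the desired target form
    "y_tgt": {"pos": "V", "tense": "PST"},  # y(t) = [y1, ..., yK], one value per label category
}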
Each discrete variable y(t) k represents a categorical feature pertaining to the target sequence, and has a set of possible labels. In the later sections, we also use y(t) and y(t) k to denote discrete latent variables corresponding to these labels. Given a source sequence x(s) and a set of associated target labels y(t), our goal is to generate a target sequence x(t) that exhibits the features specified by y(t) using a probabilistic model p(x(t)|x(s), y(t)). The best target sequence ˆx(t) is then given by: ˆx(t) = arg max x(t) p(x(t)|x(s), y(t)). (1) Morphological Reinflection Problem: In morphological reinflection, the source sequence x(s) consists of the characters in an inflected word (e.g., “played”), while the associated labels y(t) describe some linguistic features (e.g., y(t) pos = Verb, y(t) tense = Past) that we hope to realize in the target. The target sequence x(t) is therefore the characters of the re-inflected form of the source word (e.g., “played”) that satisfy the linguistic features specified by y(t). For this task, each discrete variable y(t) k has a set of possible labels (e.g. pos=V, pos=ADJ, etc) and follows a multinomial distribution. 3 Proposed Method 3.1 Preliminaries: Variational Autoencoder As mentioned above, our proposed model uses probabilistic latent variables in a model based on neural networks. The variational autoencoder (Kingma and Welling, 2014) is an efficient way to handle (continuous) latent variables in neural 311 (a) VAE y(t) x(t) x(t) x(s) x(s) x x x z y z z z z y(t) y (b) Labeled MSVAE (c) MSVAE (d) Labeled MSVED (e) MSVED Figure 2: Graphical models of (a) VAE, (b) labeled MSVAE, (c) MSVAE, (d) labeled MSVED, and (e) MSVED. White circles are latent variables and shaded circles are observed variables. Dashed lines indicate the inference process while the solid lines indicate the generative process. models. We describe it briefly here, and interested readers can refer to Doersch (2016) for details. The VAE learns a generative model of the probability p(x|z) of observed data x given a latent variable z, and simultaneously uses a recognition model q(z|x) at learning time to estimate z for a particular observation x (Fig. 2(a)). q(·) and p(·) are modeled using neural networks parameterized by φ and θ respectively, and these parameters are learned by maximizing the variational lower bound on the marginal log likelihood of data: log pθ(x) ≥Ez∼qφ(z|x)[log pθ(x|z)]− KL(qφ(z|x)||p(z)) (2) The KL-divergence term (a standard feature of variational methods) ensures that the distributions estimated by the recognition model qφ(z|x) do not deviate far from our prior probability p(z) of the values of the latent variables. To optimize the parameters with gradient descent, Kingma and Welling (2014) introduce a reparameterization trick that allows for training using simple backpropagation w.r.t. the Gaussian latent variables z. Specifically, we can express z as a deterministic variable z = gφ(ϵ, x) where ϵ is an independent Gaussian noise variable ϵ ∼N(0, 1). The mean µ and the variance σ2 of z are reparameterized by the differentiable functions w.r.t. φ. Thus, instead of generating z from qφ(z|x), we sample the auxiliary variable ϵ and obtain z = µφ(x)+σφ(x)◦ϵ, which enables gradients to backpropagate through φ. 3.2 Multi-space Variational Autoencoders As an intermediate step to our full model, we next describe a generative model for a single sequence with both continuous and discrete latent variables, the multi-space variational auto-encoder (MSVAE). 
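Before moving on to the MSVAE, the reparameterization trick and the Gaussian KL term appearing in the bound of Eq. 2 can be written in a few lines. This is a generic PyTorch-style sketch for illustration, not the authors' implementation.

import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); the sample is a deterministic,
    # differentiable function of (mu, logvar), so gradients can flow back to phi.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)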
MSVAEs are a combination of two threads of previous work: deep generative models with both continuous/discrete latent variables for classification problems (Kingma et al., 2014; Maaløe et al., 2016) and VAEs with only continuous variables for sequential data (Bowman et al., 2016; Chung et al., 2015; Zhang et al., 2016; Fabius and van Amersfoort, 2014; Bayer and Osendorfer, 2014). In MSVAEs, we have an observed sequence x, continuous latent variables z like the VAE, as well as discrete variables y. In the case of the morphology example, x can be interpreted as an inflected word to be generated. y is a vector representing its linguistic labels, either annotated by an annotator in the observed case, or unannotated in the unobserved case. z is a vector of latent continuous variables, e.g. a latent embedding of the lemma that captures all the information about x that is not already represented in labels y. MSVAE: Because inflected words can be naturally thought of as “lemma+morphological labels”, to interpret a word, we resort to discrete and continuous latent variables that represent the linguistic labels and the lemma respectively. In this case when the labels of the sequence y is not observed, we perform inference over possible linguistic labels and these inferred labels are referenced in generating x. The generative model pθ(x, y, z) = p(z)pπ(y)pθ(x|y, z) is defined as: p(z) = N(z|0, I) (3) pπ(y) = Y k Cat(yk|πk) (4) pθ(x|y, z) = f(x; y, z, θ). (5) Like the standard VAE, we assume the prior of the latent variable z is a diagonal Gaussian distribution with zero mean and unit variance. We assume that each variable in y is independent, resulting in a factorized distribution in Eq. 4, where Cat(yk|πk) is a multinomial distribution with parameters πk. For the purposes of this study, we set these to a uniform distribution πk,j = 1 |πk|. f(x; y, z, θ) calculates the likelihood of x, a function parametrized by deep neural networks. Specifically, we employ an RNN decoder to generate the target word conditioned on the lemma variable z and linguistic labels y, detailed in §5. When inferring the latent variables from the given data x, we assume the joint distribution of latent variables z and y has a factorized form, i.e. q(z, y|x) = q(z|x)q(y|x) as shown in Fig. 2(c). 312 The inference model is defined as follows: qφ(z|x) = N(z|µφ(x), diag(σ2 φ(x))) (6) qφ(y|x) = Y k qφ(yk|x) = Y k Cat(yk|πφ(x)) (7) where the inference distribution over z is a diagonal Gaussian distribution with mean and variance parameterized by neural networks. The inference model q(y|x) on labels y has the form of a discriminative classifier that generates a set of multinomial probability vectors πφ(x) over all labels for each tag yk. We represent each multinomial distribution q(yk|x) with an MLP. The MSVAE is trained by maximizing the following variational lower bound U(x) on the objective for unlabeled data: log pθ(x) ≥E(y,z)∼qφ(y,z|x) log pθ(x, y, z) qφ(y, z|x) = Ey∼qφ(y|x)[Ez∼qφ(z|x)[log pθ(x|z, y)] −KL(qφ(z|x)||p(z)) + log pπ(y) −log qφ(y|x)] = U(x) (8) Note that this introduction of discrete variables requires more sophisticated optimization algorithms, which we will discuss in §4.1. Labeled MSVAE: When y is observed as shown in Fig. 2(b), we maximize the following variational lower bound on the marginal log likelihood of the data and the labels: log pθ(x, y) ≥Ez∼qφ(z|x) log pθ(x, y, z) qφ(z|x) = Ez∼qφ(z|x)[log pθ(x|y, z) + log pπ(y)] −KL(qφ(z|x)||p(z)) (9) which is a simple extension to Eq. 2. 
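As a concrete companion to Eq. 9, the labeled lower bound can be estimated with a single reparameterized sample. The encoder, decoder, and log_prior_y objects below are hypothetical placeholders, and the helpers reparameterize and kl_to_standard_normal are the ones sketched above; with the uniform label prior used in the paper, the log p_pi(y) term is simply a constant.

def labeled_msvae_bound(x, y, encoder, decoder, log_prior_y):
    # One-sample Monte Carlo estimate of Eq. 9:
    #   E_{z ~ q(z|x)}[ log p_theta(x|y,z) + log p_pi(y) ] - KL( q(z|x) || p(z) )
    mu, logvar = encoder(x)                    # q_phi(z|x) = N(mu, diag(sigma^2))
    z = reparameterize(mu, logvar)
    recon_logprob = decoder.log_prob(x, y, z)  # log p_theta(x | y, z) from the RNN decoder
    return recon_logprob + log_prior_y(y) - kl_to_standard_normal(mu, logvar)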
Note that when labels are not observed, the inference model qφ(y|x) has the form of a discriminative classifier, thus we can use observed labels as the supervision signal to learn a better classifier. In this case we also minimize the following cross entropy as the classification loss: D(x, y) = E(x,y)∼pl(x,y)[−log qφ(y|x)] (10) where pl(x, y) is the distribution of labeled data. This is a form of multi-task learning, as this additional loss also informs the learning of our representations. 3.3 Multi-space Variational Encoder-Decoders Finally, we discuss the full proposed method: the multi-space variational encoder-decoder (MSVED), which generates the target x(t) from the source x(s) and labels y(t). Again, we discuss two cases of this model: labels of the target sequence are observed and not observed. MSVED: The graphical model for the MSVED is given in Fig. 2 (e). Because the labels of target sequence are not observed, once again we treat them as discrete latent variables and make inference on the these labels conditioned on the target sequence. The generative process for the MSVED is very similar to that of the MSVAE with one important exception: while the standard MSVAE conditions the recognition model q(z|x) on x, then generates x itself, the MSVED conditions the recognition model q(z|x(s)) on the source x(s), then generates the target x(t). Because only the recognition model is changed, the generative equations for pθ(x(t), y(t), z) are exactly the same as Eqs. 3–5 with x(t) swapped for x and y(t) swapped for y. The variational lower bound on the conditional log likelihood, however, is affected by the recognition model, and thus is computed as: log pθ(x(t)|x(s)) ≥E(y(t),z)∼qφ(y(t),z|x(s),x(t)) log pθ(x(t), y(t), z|x(s)) qφ(y(t), z|x(s), x(t)) =Ey(t)∼qφ(y(t)|x(t))[Ez∼qφ(z|x(s))[log pθ(x(t)|y(t), z)] −KL(qφ(z|x(s))||p(z)) + log pπ(y(t)) −log qφ(y(t)|x(t))] = Lu(x(t)|x(s)) (11) Labeled MSVED: When the complete form of x(s), y(t), and x(t) is observed in our training data, the graphical model of the labeled MSVED model is illustrated in Fig. 2 (d). We maximize the variational lower bound on the conditional log likelihood of observing x(t) and y(t) as follows: log pθ(x(t), y(t)|x(s)) ≥Ez∼qφ(z|x(s)) log pθ(x(t), y(t), z|x(s)) qφ(z|x(s)) = Ez∼qφ(z|x(s))[log pθ(x(t)|y(t), z) + log pπ(y(t))]− KL(qφ(z|x(s))||p(z)) = Ll(x(t), y(t)|x(s)) (12) 4 Learning MSVED Now that we have described our overall model, we discuss details of the learning process that prove 313 useful to its success. 4.1 Learning Discrete Latent Variables One challenge in training our model is that it is not trivial to perform back-propagation through discrete random variables, and thus it is difficult to learn in the models containing discrete tags such as MSVAE or MSVED.4 To alleviate this problem, we use the recently proposed Gumbel-Softmax trick (Maddison et al., 2014; Gumbel and Lieblein, 1954) to create a differentiable estimator for categorical variables. The Gumbel-Max trick (Gumbel and Lieblein, 1954) offers a simple way to draw samples from a categorical distribution with class probabilities π1, π2, · · · by using the argmax operation as follows: one hot(arg maxi[gi + log πi]), where g1, g2, · · · are i.i.d. 
samples drawn from the Gumbel(0,1) distribution.5 When making inferences on the morphological labels y1, y2, · · · , the GumbelMax trick can be approximated by the continuous softmax function with temperature τ to generate a sample vector ˆyi for each label i: ˆyij = exp((log(πij) + gij)/τ) PNi k=1 exp((log(πik) + gik)/τ (13) where Ni is the number of classes of label i. When τ approaches zero, the generated sample ˆyi becomes a one-hot vector. When τ > 0, ˆyi is smooth w.r.t πi. In experiments, we start with a relatively large temperature and decrease it gradually. 4.2 Learning Continuous Latent Variables MSVED aims at generating the target sequence conditioned on the latent variable z and the target labels y(t). This requires the encoder to generate an informative representation z encoding the content of the x(s). However, the variational lower bound in our loss function contains the KL-divergence between the approximate posterior qφ(z|x) and the prior p(z), which is relatively easy to learn compared with learning to generate output from a latent representation. We observe that with the vanilla implementation the KL cost quickly decreases to near zero, setting qφ(z|x) equal to standard normal distribution. In 4 Kingma et al. (2014) solve this problem by marginalizing over all labels, but this is infeasible in our case where we have an exponential number of label combinations. 5The Gumbel (0,1) distribution can be sampled by first drawing u ∼ Uniform(0,1) and computing g = −log(−log(u)). this case, the RNN decoder can easily rely on the true output of last time step during training to decode the next token, which degenerates into an RNN language model. Hence, the latent variables are ignored by the decoder and cannot encode any useful information. The latent variable z learns an undesirable distribution that coincides with the imposed prior distribution but has no contribution to the decoder. To force the decoder to use the latent variables, we take the following two approaches which are similar to Bowman et al. (2016). KL-Divergence Annealing: We add a coefficient λ to the KL cost and gradually anneal it from zero to a predefined threshold λm. At the early stage of training, we set λ to be zero and let the model first figure out how to project the representation of the source sequence to a roughly right point in the space and then regularize it with the KL cost. Although we are not optimizing the tight variational lower bound, the model balances well between generation and regularization. This technique can also be seen in (Koˇcisk`y et al., 2016; Miao and Blunsom, 2016). Input Dropout in the Decoder: Besides annealing the KL cost, we also randomly drop out the input token with a probability of β at each time step of the decoder during learning. The previous ground-truth token embedding is replaced with a zero vector when dropped. In this way, the RNN decoder could not fully rely on the ground-truth previous token, which ensures that the decoder uses information encoded in the latent variables. 5 Architecture for Morphological Reinflection Training details: For the morphological reinflection task, our supervised training data consists of source x(s), target x(t), and target tags y(t). We test three variants of our model trained using different types of data and different loss functions. First, the single-directional supervised model (SDSup) is purely supervised: it only decodes the target word from the given source word with the loss function Ll(x(t), y(t)|x(s)) from Eq. 12. 
Second, the bi-directional supervised model (BDSup) is trained in both directions: decoding the target word from the source word and decoding the source word from the target word, which corresponds to the loss function Ll(x(t), y(t)|x(s)) + Lu(x(s)|x(t)) using Eqs. 11–12. Finally, the semisupervised model (Semi-sup) is trained to maxi314 Proposed MSVED Baseline MED Language #LD #ULD SD-Sup BD-Sup Semi-sup Single Ensemble Turkish 12,798 29,608 93.25 95.66† 97.25‡ 89.56 95.00 Hungarian 19,200 34,025 97.00 98.54† 99.16‡ 96.46 98.37 Spanish 12,799 72,151 88.32 91.50 93.74 94.74†‡ 96.69 Russian 12,798 67,691 75.77 83.07 86.80‡ 83.55† 87.13 Navajo 12,635 6,839 85.00 95.37† 98.25‡ 63.62 83.00 Maltese 19,200 46,918 84.83 88.21† 88.46‡ 79.59 84.25 Arabic 12,797 53,791 79.13 92.62† 93.37‡ 72.58 82.80 Georgian 12,795 46,562 89.31 93.63† 95.97‡ 91.06 96.21 German 12,777 56,246 75.55 84.08 90.28‡ 89.11† 92.41 Finnish 12,800 74,687 75.59 85.11 91.20‡ 85.63† 93.18 Avg. Acc – – 84.38 90.78† 93.45‡ 84.59 90.90 Table 1: Results for Task 3 of SIGMORPHON 2016 on Morphology Reinflection. † represents the best single supervised model score, ‡ represents the best model including semi-supervised models, and bold represents the best score overall. #LD and #ULD are the number of supervised data and unlabeled words respectively. k a l b ⌃(x) µ(x) ✏⇠N(0, 1) z <w> k k ä + yT 1 yT 2 yT 3 yT 4 .... ...... k a l b ⌃(x) µ(x) ✏⇠N(0, 1) z <w> k k a Multinomial Sampling + ...... y1 2 {pos: V, N, ADJ}.. y2 2 {def: DEF, INDEF} y3 2 {num: DU, SG, PL}... ... ... Source Word Reinflected Form Source Word Source Word Supervised Variational Encoder Decoder Unsupervised Variational Auto-encoder y1 y2 y3 y4 · · · Figure 3: Model architecture for labeled and unlabeled data. For the encoder-decoder model, only one direction from the source to target is given. The classification model is not illustrated in the diagram. mize the variational lower bounds and minimize the classification cross-entropy error of 10. L(x(s), x(t), y(t), x) = α · U(x) + Lu(x(s)|x(t)) + Ll(x(t), y(t)|x(s)) −D(x(t), y(t)) (14) The weight α controls the relative weight between the loss from unlabeled data and labeled data. We use Monte Carlo methods to estimate the expectation over the posterior distribution q(z|x) and q(y|x) inside the objective function 14. Specifically, we draw Gumbel noise and Gaussian noise one at a time to compute the latent variables y and z. The overall model architecture is shown in Fig. 3. Each character and each label is associated with a continuous vector. We employ Gated Recurrent Units (GRUs) for the encoder and decoder. Let −→ ht and ←− ht denote the hidden state of the forward and backward encoder RNN at time step t. u is the hidden representation of x(s) concatenating the last hidden state from both directions i.e. [−→ hT ; ←− hT ] where T is the word length. u is used as the input for the inference model on z. We represent µ(u) and σ2(u) as MLPs and sample z from N(µ(u), diag(σ2(u))), using z = µ + σ ◦ϵ, where ϵ ∼N(0, I). Similarly, we can obtain the hidden representation of x(t) and use this as input to the inference model on each label y(t) i which is also an MLP following a softmax layer to generate the categorical probabilities of target labels. In decoding, we use 3 types of information in calculating the probability of the next character : (1) the current decoder state, (2) a tag context vector using attention (Bahdanau et al., 2015) over the tag embeddings, and (3) the latent variable z. 
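A minimal sketch of such a decoding step is given below. The layer sizes follow the hyperparameters reported in the experimental setup (300-dimensional character embeddings, 200-dimensional tag embeddings, 256-dimensional hidden states, 150-dimensional z), but the exact attention parameterization and layer names are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CharDecoderStep(nn.Module):
    # One decoding step conditioned on (1) the GRU state, (2) an attention
    # context over the target-label embeddings, and (3) the latent variable z.
    def __init__(self, char_vocab, emb_dim=300, tag_dim=200, hid_dim=256, z_dim=150):
        super().__init__()
        self.gru_cell = nn.GRUCell(emb_dim, hid_dim)
        self.attn_score = nn.Linear(hid_dim + tag_dim, 1)   # assumed additive-style scoring over tags
        self.out = nn.Linear(hid_dim + tag_dim + z_dim, char_vocab)

    def forward(self, prev_char_emb, state, tag_embs, z):
        state = self.gru_cell(prev_char_emb, state)                                  # (1) decoder state
        expanded = state.unsqueeze(1).expand(-1, tag_embs.size(1), -1)
        scores = self.attn_score(torch.cat([expanded, tag_embs], dim=-1)).squeeze(-1)
        context = (F.softmax(scores, dim=-1).unsqueeze(-1) * tag_embs).sum(dim=1)    # (2) tag context
        logits = self.out(torch.cat([state, context, z], dim=-1))                    # (3) concatenate z
        return F.log_softmax(logits, dim=-1), state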
The intuition behind this design is that we would like the model to constantly consider the lemma represented by z, and also reference the tag corresponding to the current morpheme being generated at this point. We do not marginalize over the latent variable z however, instead we use the mode µ of z as the latent representation for z. We use beam search with a beam size of 8 to perform search over the character vocabulary at each decoding time step. Other experimental setups: All hyperparameters are tuned on the validation set, and include the following: For KL cost annealing, λm is set to be 0.2 for all language settings. For character drop-out at the decoder, we empirically set β to be 0.4 for all languages. We set the dimension of character embeddings to be 300, tag label embeddings to be 200, RNN hidden state to be 256, and 315 latent variable z to be 150. We set α the weight for the unsupervised loss to be 0.8. We train the model with Adadelta (Zeiler, 2012) and use earlystop with a patience of 10. 6 Experiments 6.1 Background: SIGMORPHON 2016 SIGMORPHON 2016 is a shared task on morphological inflection over 10 different morphologically rich languages. There are a total of three tasks, the most difficult of which is task 3, which requires the system to output the reinflection of an inflected word.6 The training data format in task 3 is in triples: (source word, target labels, target word). In the test phase, the system is asked to generate the target word given a source word and the target labels. There are a total of three tracks for each task, divided based the amount of supervised data that can be used to solve the problem, among which track 2 has the strictest limitation of only using data for the corresponding task. As this is an ideal testbed for our method, which can learn from unlabeled data, we choose track 2 and task 3 to test our our model’s ability to exploit this data. As a baseline, we compare our results with the MED system (Kann and Sch¨utze, 2016a) which achieved state-of-the-art results in the shared task. This system used an encoder-decoder model with attention on the concatenated source word and target labels. Its best result is obtained from an ensemble of five RNN encoder-decoders (Ensemble). To make a fair comparison with our models, which don’t use ensembling, we also calculated single model results (Single). All models are trained using the labeled training data provided for task 3. For our semi-supervised model (Semi-sup), we also leverage unlabeled data from the training and validation data for tasks 1 and 2 to train variational auto-encoders. 6.2 Results and Analysis From the results in Tab. 1, we can glean a number of observations. First, comparing the results of our full Semi-sup model, we can see that for all languages except Spanish, it achieves accuracies better than the single MED system, often by a large margin. Even compared to the MED ensembled model, our single-model system is quite competitive, achieving higher accuracies for Hungarian, 6Task 1 is inflection of a lemma word and task 2 is reinflection but also provides the source word labels. 
Language Prefix Stem Suffix Turkish 0.21 1.12 98.75 Hungarian 0.00 0.08 99.79 Spanish 0.09 3.25 90.74 Russian 0.66 7.70 85.00 Navajo 77.64 18.38 26.40 Maltese 48.81 11.05 98.74 Arabic 68.52 37.04 88.24 Georgian 4.46 0.41 92.47 German 0.84 3.32 89.19 Finnish 0.02 12.33 96.16 Table 2: Percentage of inflected word forms that have modified each part of the lemma (Cotterell et al., 2016) (some words can be inflected zero or multiple times, thus sums may not add to 100%). 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Percentage of suffixing inflection 60 65 70 75 80 85 90 95 100 Accuracy Navajo Arabic Maltese Finnish Russian Georgian German Spanish Turkish Hungarian MSVED MED (1) Figure 4: Performance on test data w.r.t. the percentage of suffixing inflection. Points with the same x-axis value correspond to the same language results. Navajo, Maltese, and Arabic, as well as achieving average accuracies that are state-of-the-art. Next, comparing the different varieties of our proposed models, we can see that the semisupervised model consistently outperforms the bidirectional model for all languages. And similarly, the bidirectional model consistently outperforms the single direction model. From these results, we can conclude that the unlabeled data is beneficial to learn useful latent variables that can be used to decode the corresponding word. Examining the linguistic characteristics of the models in which our model performs well provides even more interesting insights. Cotterell et al. (2016) estimate how often the inflection process involves prefix changes, stem-internal changes or suffix changes, the results of which are shown in Tab. 2. Among the many languages, the inflection processes of Arabic, Maltese and Navajo are relatively diverse, and contain a large amount of all three forms of inflection. By examining the experimental results together with the morphological inflection process of different languages, we found that among all the languages, Navajo, Maltese and Arabic obtain the largest gains in performance compared with the ensem316 a l - - i m¯a r ¯a t i y y ¯a t u def=DEF gen=FEM voice=None aspect=None tense=None num=PL poss=None per=None pos=ADJ mood=None case=NOM n ´ı d a j i d l e e h arg=None aspect= IPFV/PROG num=PL per=4 pos=V mood=REAL 0.0 0.2 0.4 0.6 0.8 Figure 5: Two examples of attention weights on target linguistic labels: Arabic (Left) and Navajo (Right). When a tag equals None, it means the word does not have this tag. bled MED system. To demonstrate this visually, in Fig. 4, we compare the semi-supervised MSVED with the MED single model w.r.t. the percentage of suffixing inflection of each language, showing this clear trend. This strongly demonstrates that our model is agnostic to different morphological inflection forms whereas the conventional encoder-decoder with attention on the source input tends to perform better on suffixing-oriented morphological inflection. We hypothesize that for languages that the inflection mostly comes from suffixing, transduction is relatively easy because the source and target words share the same prefix and the decoder can copy the prefix of the source word via attention. However, for languages in which different inflections of a lemma go through different morphological processes, the inflected word and the target word may differ greatly and thus it is crucial to first analyze the lemma of the inflected word before generating the corresponding the reinflection form based on the target labels. 
This is precisely what our model does by extracting the lemma representation z learned by the variational inference model. 6.3 Analysis on Tag Attention To analyze how the decoder attends to the linguistic labels associated with the target word, we randomly pick two words from the Arabic and Navajo test set and plot the attention weight in Fig. 5. The Arabic word “al-’im¯ar¯atiyy¯atu” is an adjective which means “Emirati”, and its source word in the test data is “’im¯ar¯atiyyin” 7. Both of these are declensions of “’im¯ar¯atiyy”. The source word is 7https://en.wiktionary.org/wiki/%D8% A5%D9%85%D8%A7%D8%B1%D8%A7%D8%AA%D9%8A Figure 6: Visualization of latent variables z for Maltese with 35 pseudo-lemma groups in the figure. singular, masculine, genitive and indefinite, while the required inflection is plural, feminine, nominative and definite. We can see from the left heat map that the attention weights are turned on at several positions of the word when generating corresponding inflections. For example, “al-” in Arabic is the definite article that marks definite nouns. The same phenomenon can also be observed in the Navajo example, as well as other languages, but due to space limitation, we don’t provide detailed analysis here. 6.4 Visualization of Latent Lemmas To investigate the learned latent representations, in this section we visualize the z vectors, examining whether the latent space groups together words with the same lemma. Each sample in SIGMORPHON 2016 contains source word and target words which share the same lemma. We run a heuristic process to assign pairs of words to groups that likely share a lemma by grouping together word pairs for which at least one of the words in each pair shares a surface form. This process is not error free – errors may occur in the case where multiple lemmas share the same surface form – but in general the groupings will generally reflect lemmas except in these rare erroneous cases, so we dub each of these groups a pseudo-lemma. In Fig. 6, we randomly pick 1500 words from Maltese and visualize the continuous latent vectors of these words. We compute the latent vectors as µφ(x) in the variational posterior inference (Eq. 6) without adding the variance. As expected, words that belong to the same pseudo-lemma (in the same color) are projected into adjacent points in the two-dimensional space. This demonstrates that the continuous latent variable captures the canonical form of a set of words and demonstrates the effectiveness of the proposed representation. 317 Language Src Word Tgt Labels Gold Tgt MED Ours Turkish kocama pos=N,poss=PSS1S,case=ESS,num=SG kocamda kocama kocamda yaratmamdan pos=N,case=NOM,num=SG yaratma yaratma yaratman bitimizde pos=N,tense=PST,per=1,num=SG bittik bitiydik bittim Maltese ndammhomli pos=V,polar=NEG,tense=PST,num=SG tindammhiex ndammejthiex tindammhiex tqo˙z˙zhieli pos=V,polar=NEG,tense=PST,num=SG tqo˙z˙zx tqo˙z˙zx qa˙z˙zejtx tissikkmuhomli pos=V,polar=POS,tense=PST,num=PL ssikkmulna tissikkmulna tissikkmulna Finnish verovapaatta pos=ADJ,case=PRT,num=PL verovapaita verovappaita verovapaita turrumme pos=V,mood=POT,tense=PRS,num=PL turtunemme turtunemme turrunemme sukunimin pos=N,case=PRIV,num=PL sukunimitt sukunimeitta sukunimeitta Table 3: Randomly picked output examples on the test data. Within each block, the first, second and third lines are outputs that ours is correct and MED’s is wrong, ours is wrong and MED’s is correct, both are wrong respectively. 6.5 Analyzing Effects of Size of Unlabeled Data From Tab. 
1, we can see that semi-supervised learning always performs better than supervised learning without unlabeled data. In this section, we investigate to what extent the size of unlabeled data can help with performance. We process a German corpus from a 2017 Wikipedia dump and obtain more than 100,000 German words. These words are ranked in order of occurrence frequency in Wikipedia. The data contains a certain amount of noise since we did not apply any special processing. We shuffle all unlabeled data from both the Wikipedia and the data provided in the shared task used in previous experiments, and increase the number of unlabeled words used in learning by 10,000 each time, and finally use all the unlabeled data (more than 150,000 words) to train the model. Fig. 7 shows that the performance on the test data improves as the amount of unlabeled data increases, which implies that the unsupervised learning continues to help improve the model’s ability to model the latent lemma representation even as we scale to a noisy, real, and relatively large-scale dataset. Note that the growth rate of the performance grows slower as more data is added, because although the number of unlabeled data is increasing, the model has seen most word patterns in a relatively small vocabulary. 6.6 Case Study on Reinflected Words In Tab. 3, we examine some model outputs on the test data from the MED system and our model respectively. It can be seen that most errors of MED and our models can be ascribed to either over-copy or under-copy of characters. In particular, from the complete outputs we observe that our model tends to be more aggressive in its changes, resulting in 0 1e4 2e4 3e4 5e4 >15e5 # Unlabeled words 83 84 85 86 87 88 89 90 91 92 Accuracy on test data (%) Figure 7: Performance on the German test data w.r.t. the amount of unlabeled Wikipedia data. it performing more complicated transformations, both successfully (such as Maltese “ndammhomli” to “tindammhiex”) and unsuccessfully (“tqo˙z˙zx” to “qa˙z˙zejtx”). In contrast, the attentional encoderdecoder model is more conservative in its changes, likely because it is less effective in learning an abstracted representation for the lemma, and instead copies characters directly from the input. 7 Conclusion and Future Work In this work, we propose a multi-space variational encoder-decoder framework for labeled sequence transduction problem. The MSVED performs well in the task of morphological reinflection, outperforming the state of the art, and further improving with the addition of external unlabeled data. Future work will adapt this framework to other sequence transduction scenarios such as machine translation, dialogue generation, question answering, where continuous and discrete latent variables can be abstracted to guide sequence generation. Acknowledgments The authors thank Jiatao Gu, Xuezhe Ma, Zihang Dai and Pengcheng Yin for their helpful discussions. This work has been supported in part by an Amazon Academic Research Award. 318 References I˜naki Alegria and Izaskun Etxeberria. 2016. Ehu at the sigmorphon 2016 shared task. a simple proposal: Grapheme-to-phoneme for inflection. In Proceedings of the 2016 Meeting of SIGMORPHON . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. The International Conference on Learning Representations . Justin Bayer and Christian Osendorfer. 2014. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610 . 
Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. Proceedings of CoNLL . Victor Chahuneau, Eva Schlinger, Noah A Smith, and Chris Dyer. 2013. Translating into morphologically rich languages with synthetic phrases. Association for Computational Linguistics. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in neural information processing systems. pages 2980–2988. R. Cotterell, C. Kirov, J. Sylak-Glassman, D. Yarowsky, J. Eisner, and M. Hulden. 2016. The sigmorphon 2016 shared taskmorphological reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Kareem Darwish and Douglas W Oard. 2007. Adapting morphology for arabic information retrieval. In Arabic Computational Morphology, Springer, pages 245–262. Carl Doersch. 2016. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908 . Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 1185–1195. Otto Fabius and Joost R van Amersfoort. 2014. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581 . Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 634–643. Emil Julius Gumbel and Julius Lieblein. 1954. Statistical theory of extreme values and some practical applications: a series of lectures. US Government Printing Office Washington . Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558 . Katharina Kann, Ryan Cotterell, and Hinrich Sch¨utze. 2016. Neural multi-source morphological reinflection. arXiv preprint arXiv:1612.06027 . Katharina Kann and Hinrich Sch¨utze. 2016a. Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection. In In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. Berlin, Germany. Katharina Kann and Hinrich Sch¨utze. 2016b. Singlemodel encoder-decoder with explicit morphological representation for reinflection. In In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1328–1338. Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems. Montr´eal, Canada, pages 3581–3589. D.P. Kingma and M. 
Welling. 2014. Auto-encoding variational bayes. In The International Conference on Learning Representations. Tom´aˇs Koˇcisk`y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) . Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. 2016. Auxiliary deep generative models. Proceedings of the 33rd International Conference on Machine Learning . Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Information Processing Systems. pages 3086–3094. 319 Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) . Garrett Nicolai, Bradley Hauer, Adam St. Arnaud, and Grzegorz Kondrak. 2016. Morphological reinflection via discriminative string transduction. In Proceedings of the 2016 Meeting of SIGMORPHON . Robert Ostling. 2016. Morphological reinflection with convolutional neural networks. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology page 23. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of The North American Chapter of the Association for Computational Linguistics (NAACL). pages 35–40. Dima Taji, Ramy Eskander, Nizar Habash, and Owen Rambow. 2016. The columbia university - new york university abu dhabi sigmorphon 2016 morphological reinflection shared task submission. In Proceedings of the 2016 Meeting of SIGMORPHON . Keiichi Tokuda, Takashi Masuko, Noboru Miyazaki, and Takao Kobayashi. 2002. Multi-space probability distribution hmm. IEICE TRANSACTIONS on Information and Systems 85(3):455–464. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. pages 514–522. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics . 320
2017
29
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 23–33 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1003 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 23–33 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1003 Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision Chen Liang⇤, Jonathan Berant†, Quoc Le, Kenneth D. Forbus, Ni Lao Northwestern University, Evanston, IL Tel-Aviv University, Tel Aviv-Yafo, Israel Google Inc., Mountain View, CA {chenliang2013,forbus}@u.northwestern.edu, [email protected], {qvl,nlao}@google.com Abstract Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine (NSM), which contains (a) a neural “programmer”, i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic “computer”, i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-theart on the WEBQUESTIONSSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge. 1 Introduction Deep neural networks have achieved impressive performance in supervised classification and structured prediction tasks such as speech recognition (Hinton et al., 2012), machine translation (Bahdanau et al., 2014; Wu et al., 2016) and more. However, training neural networks for semantic parsing (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011) or program induction, where language is mapped to a sym⇤Work done while the author was interning at Google † Work done while the author was visiting Google x: Largest city in the US ⇒ y: NYC (USA) (Hop v0 CityIn) CityIn (Argmax v1 Population) Population Compositionality Large Search Space ( Argmax Hop v1 v0 Population Size Elevation ) v2 ← v1 ← v0 ← Figure 1: The main challenges of training a semantic parser from weak supervision: (a) compositionality: we use variables (v0, v1, v2) to store execution results of intermediate generated programs. (b) search: we prune the search space and augment REINFORCE with pseudo-gold programs. bolic representation that is executed by an executor, through weak supervision remains challenging. This is because the model must interact with a symbolic executor through non-differentiable operations to search over a large program space. In semantic parsing, recent work handled this (Dong and Lapata, 2016; Jia and Liang, 2016) by training from manually annotated programs and avoiding program execution at training time. However, annotating programs is known to be expensive and scales poorly. 
In program induction, attempts to address this problem (Graves et al., 2014; Reed and de Freitas, 2016; Kaiser and Sutskever, 2015; Graves et al., 2016b; Andreas et al., 2016) either utilized low-level memory (Zaremba and Sutskever, 2015), or required memory to be differentiable (Neelakantan et al., 2015; Yin et al., 2015) so that the model can be trained with backpropagation. This makes it difficult to use the efficient discrete operations and memory of a traditional computer, and limited the application to synthetic or small knowledge bases. In this paper, we propose to utilize the memory and discrete operations of a traditional com23 puter in a novel Manager-Programmer-Computer (MPC) framework for neural program induction, which integrates three components: 1. A “manager” that provides weak supervision (e.g., ‘NYC’ in Figure 1) through a reward indicating how well a task is accomplished. Unlike full supervision, weak supervision is easy to obtain at scale (Section 3.1). 2. A “programmer” that takes natural language as input and generates a program that is a sequence of tokens (Figure 2). The programmer learns from the reward and must overcome the hard search problem of finding correct programs (Section 2.2). 3. A “computer” that executes programs in a high level programming language. Its nondifferentiable memory enables abstract, scalable and precise operations, but makes training more challenging (Section 2.3). To help the “programmer” prune the search space, it provides a friendly neural computer interface, which detects and eliminates invalid choices (Section 2.1). Within this framework, we introduce the Neural Symbolic Machine (NSM) and apply it to semantic parsing. NSM contains a neural sequenceto-sequence (seq2seq) “programmer” (Sutskever et al., 2014) and a symbolic non-differentiable Lisp interpreter (“computer”) that executes programs against a large knowledge-base (KB). Our technical contribution in this work is threefold. First, to support language compositionality, we augment the standard seq2seq model with a key-variable memory to save and reuse intermediate execution results (Figure 1). This is a novel application of pointer networks (Vinyals et al., 2015) to compositional semantics. Second, to alleviate the search problem of finding correct programs when training from questionanswer pairs,we use the computer to execute partial programs and prune the programmer’s search space by checking the syntax and semantics of generated programs. This generalizes the weakly supervised semantic parsing framework (Liang et al., 2011; Berant et al., 2013) by leveraging semantic denotations during structural search. Third, to train from weak supervision and directly maximize the expected reward we turn to the REINFORCE (Williams, 1992) algorithm. Since learning from scratch is difficult for REINFORCE, we combine it with an iterative maximum likelihood (ML) training process, where beam search is used to find pseudo-gold programs, which are then used to augment the objective of REINFORCE. On the WEBQUESTIONSSP dataset (Yih et al., 2016), NSM achieves new state-of-the-art results with weak supervision, significantly closing the gap between weak and full supervision for this task. Unlike prior works, it is trained end-toend, and does not require feature engineering or domain-specific knowledge. 
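To give a concrete sense of the programs NSM produces (Figure 1), the following toy sketch executes a two-expression program against a hand-made knowledge base, following the Hop and ArgMax semantics defined later in Table 1. The entity ids, property names, and population numbers are invented purely for illustration.

# Tiny toy KB of (entity, property, value) triples; values may be entities or numbers.
KB = {
    ("m.USA", "!CityIn", "m.NYC"), ("m.USA", "!CityIn", "m.LA"),
    ("m.NYC", "Population", 8500000), ("m.LA", "Population", 4000000),
}

def hop(r, p):
    # ( Hop r p ) -> { e2 | e1 in r, (e1, p, e2) in K }
    return {e2 for (e1, prop, e2) in KB if e1 in r and prop == p}

def argmax(r, p):
    # ( ArgMax r p ) -> entity in r with the largest p-value (ties broken arbitrarily here)
    return {max(r, key=lambda e: max(v for (s, prop, v) in KB if s == e and prop == p))}

v0 = {"m.USA"}                   # variable holding the linked entity for "US"
v1 = hop(v0, "!CityIn")          # all cities located in the US
v2 = argmax(v1, "Population")    # {"m.NYC"}: the largest city, i.e. the answer

Each executed expression writes its denotation into a new variable (v0, v1, v2), which is the compositional behavior that the key-variable memory of Section 2.2 is designed to track.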
2 Neural Symbolic Machines We now introduce NSM by first describing the “computer”, a non-differentiable Lisp interpreter that executes programs against a large KB and provides code assistance (Section 2.1). We then propose a seq2seq model (“programmer”) that supports compositionality using a key-variable memory to save and reuse intermediate results (Section 2.2). Finally, we describe a training procedure that is based on REINFORCE, but is augmented with pseudo-gold programs found by an iterative ML training procedure (Section 2.3). Before diving into details, we define the semantic parsing task: given a knowledge base K, and a question x = (w1, w2, ..., wm), produce a program or logical form z that when executed against K generates the right answer y. Let E denote a set of entities (e.g., ABELINCOLN),1 and let P denote a set of properties (e.g., PLACEOFBIRTH). A knowledge base K is a set of assertions or triples (e1, p, e2) 2 E ⇥P ⇥E, such as (ABELINCOLN, PLACEOFBIRTH, HODGENVILLE). 2.1 Computer: Lisp Interpreter with Code Assistance Semantic parsing typically requires using a set of operations to query the knowledge base and process the results. Operations learned with neural networks such as addition and sorting do not perfectly generalize to inputs that are larger than the ones observed in the training data (Graves et al., 2014; Reed and de Freitas, 2016). In contrast, operations implemented in high level programming languages are abstract, scalable, and precise, thus generalizes perfectly to inputs of arbitrary size. Based on this observation, we implement operations necessary for semantic parsing with an or1We also consider numbers (e.g., “1.33”) and date-times (e.g., “1999-1-1”) as entities. 24 dinary programming language instead of trying to learn them with a neural network. We adopt a Lisp interpreter as the “computer”. A program C is a list of expressions (c1...cN), where each expression is either a special token “Return” indicating the end of the program, or a list of tokens enclosed by parentheses “(FA1...AK)”. F is a function, which takes as input K arguments of specific types. Table 1 defines the semantics of each function and the types of its arguments (either a property p or a variable r). When a function is executed, it returns an entity list that is the expression’s denotation in K, and save it to a new variable. By introducing variables that save the intermediate results of execution, the program naturally models language compositionality and describes from left to right a bottom-up derivation of the full meaning of the natural language input, which is convenient in a seq2seq model (Figure 1). This is reminiscent of the floating parser (Wang et al., 2015; Pasupat and Liang, 2015), where a derivation tree that is not grounded in the input is incrementally constructed. The set of programs defined by our functions is equivalent to the subset of λ-calculus presented in (Yih et al., 2015). We did not use full Lisp programming language here, because constructs like control flow and loops are unnecessary for most current semantic parsing tasks, and it is simple to add more functions to the model when necessary. To create a friendly neural computer interface, the interpreter provides code assistance to the programmer by producing a list of valid tokens at each step. First, a valid token should not cause a syntax error: e.g., if the previous token is “(”, the next token must be a function name, and if the previous token is “Hop”, the next token must be a variable. 
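As a rough illustration of the syntax-level part of this code assistance, a next-token filter can be sketched as below. The grammar encoded here is a simplified reconstruction from the function signatures in Table 1, not the interpreter's actual implementation, and the semantic (run-time) checks described next are omitted.

FUNCTIONS = {"Hop", "ArgMax", "ArgMin", "Filter"}

def valid_next_tokens(prev_token, variables, properties):
    # variables and properties are assumed to be supplied by the interpreter state.
    if prev_token == "(":
        return set(FUNCTIONS)                    # a function name must follow "("
    if prev_token in FUNCTIONS:
        return set(variables)                    # every function first takes a variable
    if prev_token in variables:
        return set(properties) | set(variables)  # a property, or a second variable (Filter)
    if prev_token in properties:
        return {")"}                             # close the expression
    return {"(", "Return"}                       # start a new expression or end the program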
More importantly, a valid token should not cause a semantic (run-time) error: this is detected using the denotation saved in the variables. For example, if the previously generated tokens were “( Hop r”, the next available token is restricted to properties {p | 9e, e0 : e 2 r, (e, p, e0) 2 K} that are reachable from entities in r in the KB. These checks are enabled by the variables and can be derived from the definition of the functions in Table 1. The interpreter prunes the “programmer”’s search space by orders of magnitude, and enables learning from weak supervision on a large KB. 2.2 Programmer: Seq2seq Model with Key-Variable Memory Given the “computer”, the “programmer” needs to map natural language into a program, which is a sequence of tokens that reference operations and values in the “computer”. We base our programmer on a standard seq2seq model with attention, but extend it with a key-variable memory that allows the model to learn to represent and refer to program variables (Figure 2). Sequence-to-sequence models consist of two RNNs, an encoder and a decoder. We used a 1-layer GRU (Cho et al., 2014) for both the encoder and decoder. Given a sequence of words w1, w2...wm, each word wt is mapped to an embedding qt (embedding details are in Section 3). Then, the encoder reads these embeddings and updates its hidden state step by step using ht+1 = GRU(ht, qt, ✓Encoder), where ✓Encoder are the GRU parameters. The decoder updates its hidden states ut by ut+1 = GRU(ut, ct−1, ✓Decoder), where ct−1 is the embedding of last step’s output token at−1, and ✓Decoder are the GRU parameters. The last hidden state of the encoder hT is used as the decoder’s initial state. We also adopt a dot-product attention similar to Dong and Lapata (2016). The tokens of the program a1, a2...an are generated one by one using a softmax over the vocabulary of valid tokens at each step, as provided by the “computer” (Section 2.1). To achieve compositionality, the decoder must learn to represent and refer to intermediate variables whose value was saved in the “computer” after execution. Therefore, we augment the model with a key-variable memory, where each entry has two components: a continuous embedding key vi, and a corresponding variable token Ri referencing the value in the “computer” (see Figure 2). During encoding, we use an entity linker to link text spans (e.g., “US”) to KB entities. For each linked entity we add a memory entry where the key is the average of GRU hidden states over the entity span, and the variable token (R1) is the name of a variable in the computer holding the linked entity (m.USA) as its value. During decoding, when a full expression is generated (i.e., the decoder generates “)”), it gets executed, and the result is stored as the value of a new variable in the “computer”. This variable is keyed by the GRU hidden state at that step. When a new variable R1 with key embedding v1 is added into the key-variable memory, 25 ( Hop r p ) ) {e2|e1 2 r, (e1, p, e2) 2 K} ( ArgMax r p ) ) {e1|e1 2 r, 9e2 2 E : (e1, p, e2) 2 K, 8e : (e1, p, e) 2 K, e2 ≥e} ( ArgMin r p ) ) {e1|e1 2 r, 9e2 2 E : (e1, p, e2) 2 K, 8e : (e1, p, e) 2 K, e2 e} ( Filter r1 r2 p ) ) {e1|e1 2 r1, 9e2 2 r2 : (e1, p, e2) 2 K} Table 1: Interpreter functions. r represents a variable, p a property in Freebase. ≥and are defined on numbers and dates. Key Variable v1 R1(m.USA) Execute ( Argmax R2 Population ) Execute Return m.NYC Key Variable ... ... 
v3 R3(m.NYC) Key Variable v1 R1(m.USA) v2 R2(list of US cities) Execute ( Hop R1 !CityIn ) Hop R1 !CityIn ( ) Largest city ( Hop R1 in US GO !CityIn Argmax R2 ( ) Population ) R2 Population Return Argmax ) ( Entity Resolver Figure 2: Semantic Parsing with NSM. The key embeddings of the key-variable memory are the output of the sequence model at certain encoding or decoding steps. For illustration purposes, we also show the values of the variables in parentheses, but the sequence model never sees these values, and only references them with the name of the variable (“R1”). A special token “GO” indicates the start of decoding, and “Return” indicates the end of decoding. the token R1 is added into the decoder vocabulary with v1 as its embedding. The final answer returned by the “programmer” is the value of the last computed variable. Similar to pointer networks (Vinyals et al., 2015), the key embeddings for variables are dynamically generated for each example. During training, the model learns to represent variables by backpropagating gradients from a time step where a variable is selected by the decoder, through the key-variable memory, to an earlier time step when the key embedding was computed. Thus, the encoder/decoder learns to generate representations for variables such that they can be used at the right time to construct the correct program. While the key embeddings are differentiable, the values referenced by the variables (lists of entities), stored in the “computer”, are symbolic and non-differentiable. This distinguishes the keyvariable memory from other memory-augmented neural networks that use continuous differentiable embeddings as the values of memory entries (Weston et al., 2014; Graves et al., 2016a). 2.3 Training NSM with Weak Supervision NSM executes non-differentiable operations against a KB, and thus end-to-end backpropagation is not possible. Therefore, we base our training procedure on REINFORCE (Williams, 1992; Norouzi et al., 2016). When the reward signal is sparse and the search space is large, it is common to utilize some full supervision to pre-train REINFORCE (Silver et al., 2016). To train from weak supervision, we suggest an iterative ML procedure for finding pseudo-gold programs that will bootstrap REINFORCE. REINFORCE We can formulate training as a reinforcement learning problem: given a question x, the state, action and reward at each time step t 2 {0, 1, ..., T} are (st, at, rt). Since the environment is deterministic, the state is defined by the question x and the action sequence: st = (x, a0:t−1), where a0:t−1 = (a0, ..., at−1) is the history of actions at time t. A valid action at time t is at 2 A(st), where A(st) is the set of valid tokens given by the “computer”. Since each action corresponds to a token, the full history a0:T corresponds to a program. The reward rt = I[t = T] · F1(x, a0:T ) is non-zero only at the last step of decoding, and is the F1 score computed comparing the gold answer and the answer generated by executing the program a0:T . Thus, the cumulative reward of a program a0:T is R(x, a0:T ) = X t rt = F1(x, a0:T ). The agent’s decision making procedure at each time is defined by a policy, ⇡✓(s, a) = P✓(at = a|x, a0:t−1), where ✓are the model parameters. Since the environment is deterministic, the probability of generating a program a0:T is P✓(a0:T |x) = Y t P✓(at | x, a0:t−1). We can define our objective to be the expected cumulative reward and use policy gradient meth26 ods such as REINFORCE for training. 
The objective and gradient are: JRL(✓) = X x EP✓(a0:T |x)[R(x, a0:T )], r✓JRL(✓) = X x X a0:T P✓(a0:T | x) · [R(x, a0:T )− B(x)] · r✓log P✓(a0:T | x), where B(x) = P a0:T P✓(a0:T | x)R(x, a0:T ) is a baseline that reduces the variance of the gradient estimation without introducing bias. Having a separate network to predict the baseline is an interesting future direction. While REINFORCE assumes a stochastic policy, we use beam search for gradient estimation. Thus, in contrast with common practice of approximating the gradient by sampling from the model, we use the top-k action sequences (programs) in the beam with normalized probabilities. This allows training to focus on sequences with high probability, which are on the decision boundaries, and reduces the variance of the gradient. Empirically (and in line with prior work), REINFORCE converged slowly and often got stuck in local optima (see Section 3). The difficulty of training resulted from the sparse reward signal in the large search space, which caused model probabilities for programs with non-zero reward to be very small at the beginning. If the beam size k is small, good programs fall off the beam, leading to zero gradients for all programs in the beam. If the beam size k is large, training is very slow, and the normalized probabilities of good programs when the model is untrained are still very small, leading to (1) near zero baselines, thus near zero gradients on “bad” programs (2) near zero gradients on good programs due to the low probability P✓(a0:T | x). To combat this, we present an alternative training strategy based on maximum-likelihood. Iterative ML If we had gold programs, we could directly optimize their likelihood. Since we do not have gold programs, we can perform an iterative procedure (similar to hard ExpectationMaximization (EM)), where we search for good programs given fixed parameters, and then optimize the probability of the best program found so far. We do decoding on an example with a large beam size and declare abest 0:T (x) to be the pseudogold program, which achieved highest reward with shortest length among the programs decoded on x in all previous iterations. Then, we can optimize the ML objective: JML(✓) = X x log P✓(abest 0:T (x) | x) (1) A question x is not included if we did not find any program with positive reward. Training with iterative ML is fast because there is at most one program per example and the gradient is not weighted by model probability. while decoding with a large beam size is slow, we could train for multiple epochs after each decoding. This iterative process has a bootstrapping effect that a better model leads to a better program abest 0:T (x) through decoding, and a better program abest 0:T (x) leads to a better model through training. Even with a large beam size, some programs are hard to find because of the large search space. A common solution to this problem is to use curriculum learning (Zaremba and Sutskever, 2015; Reed and de Freitas, 2016). The size of the search space is controlled by both the set of functions used in the program and the program length. We apply curriculum learning by gradually increasing both these quantities (see details in Section 3) when performing iterative ML. Nevertheless, iterative ML uses only pseudogold programs and does not directly optimize the objective we truly care about. 
This has two adverse effects: (1) The best program abest 0:T (x) could be a spurious program that accidentally produces the correct answer (e.g., using the property PLACEOFBIRTH instead of PLACEOFDEATH when the two places are the same), and thus does not generalize to other questions. (2) Because training does not observe full negative programs, the model often fails to distinguish between tokens that are related to one another. For example, differentiating PARENTSOF vs. SIBLINGSOF vs. CHILDRENOF can be challenging. We now present learning where we combine iterative ML with REINFORCE. Augmented REINFORCE To bootstrap REINFORCE, we can use iterative ML to find pseudogold programs, and then add these programs to the beam with a reasonably large probability. This is similar to methods from imitation learning (Ross et al., 2011; Jiang et al., 2012) that define a proposal distribution by linearly interpolating the model distribution and an oracle. 27 Algorithm 1 IML-REINFORCE Input: question-answer pairs D = {(xi, yi)}, mix ratio ↵, reward function R(·), training iterations NML, NRL, and beam sizes BML, BRL. Procedure: Initialize C⇤ x = ; the best program so far for x Initialize model ✓randomly . Iterative ML for n = 1 to NML do for (x, y) in D do C Decode BML programs given x for j in 1...|C| do if Rx,y(Cj) > Rx,y(C⇤ x) then C⇤ x Cj ✓ ML training with DML = {(x, C⇤ x)} Initialize model ✓randomly . REINFORCE for n = 1 to NRL do DRL ; is the RL training set for (x, y) in D do C Decode BRL programs from x for j in 1...|C| do if Rx,y(Cj) > Rx,y(C⇤ x) then C⇤ x Cj C C [ {C⇤ x} for j in 1...|C| do ˆpj (1−↵)· pj P j0 pj0 where pj = P✓(Cj | x) if Cj = C⇤ x then ˆpj ˆpj + ↵ DRL DRL [ {(x, Cj, ˆpj)} ✓ REINFORCE training with DRL Algorithm 1 describes our overall training procedure. We first run iterative ML for NML iterations and record the best program found for every example xi. Then, we run REINFORCE, where we normalize the probabilities of the programs in beam to sum to (1−↵) and add ↵to the probability of the best found program C⇤(xi). Consequently, the model always puts a reasonable amount of probability on a program with high reward during training. Note that we randomly initialized the parameters for REINFORCE, since initializing from the final ML parameters seems to get stuck in a local optimum and produced worse results. On top of imitation learning, our approach is related to the common practice in reinforcement learning (Schaul et al., 2016) to replay rare successful experiences to reduce the training variance and improve training efficiency. This is also similar to recent developments (Wu et al., 2016) in machine translation, where ML and RL objectives are linearly combined, because anchoring the model to some high-reward outputs stabilizes training. 3 Experiments and Analysis We now empirically show that NSM can learn a semantic parser from weak supervision over a large KB. We evaluate on WEBQUESTIONSSP, a challenging semantic parsing dataset with strong baselines. Experiments show that NSM achieves new state-of-the-art performance on WEBQUESTIONSSP with weak supervision, and significantly closes the gap between weak and full supervisions for this task. 3.1 The WEBQUESTIONSSP dataset The WEBQUESTIONSSP dataset (Yih et al., 2016) contains full semantic parses for a subset of the questions from WEBQUESTIONS (Berant et al., 2013), because 18.5% of the original dataset were found to be “not answerable”. 
It consists of 3,098 question-answer pairs for training and 1,639 for testing, which were collected using Google Suggest API, and the answers were originally obtained using Amazon Mechanical Turk workers. They were updated in (Yih et al., 2016) by annotators who were familiar with the design of Freebase and added semantic parses. We further separated out 620 questions from the training set as a validation set. For query pre-processing we used an in-house named entity linking system to find the entities in a question. The quality of the entity linker is similar to that of (Yih et al., 2015) at 94% of the gold root entities being included. Similar to Dong and Lapata (2016), we replaced named entity tokens with a special token “ENT”. For example, the question “who plays meg in family guy” is changed to “who plays ENT in ENT ENT”. This helps reduce overfitting, because instead of memorizing the correct program for a specific entity, the model has to focus on other context words in the sentence, which improves generalization. Following (Yih et al., 2015) we used the last publicly available snapshot of Freebase (Bollacker et al., 2008). Since NSM training requires random access to Freebase during decoding, we preprocessed Freebase by removing predicates that are not related to world knowledge (starting with “/common/”, “/type/”, “/freebase/”),2 and removing all text valued predicates, which are rarely the answer. Out of all 27K relations, 434 relations are removed during preprocessing. This results in a graph that fits in memory with 23K relations, 82M nodes, and 417M edges. 3.2 Model Details For pre-trained word embeddings, we used the 300 dimension GloVe word embeddings trained on 840B tokens (Pennington et al., 2014). On the encoder side, we added a projection matrix to 2We kept “/common/topic/notable types”. 28 transform the embeddings into 50 dimensions. On the decoder side, we used the same GloVe embeddings to construct an embedding for each property using its Freebase id, and also added a projection matrix to transform this embedding to 50 dimensions. A Freebase id contains three parts: domain, type, and property. For example, the Freebase id for PARENTSOF is “/people/person/parents”. “people” is the domain, “person” is the type and “parents” is the property. The embedding is constructed by concatenating the average of word embeddings in the domain and type name to the average of word embeddings in the property name. For example, if the embedding dimension is 300, the embedding dimension for “/people/person/parents” will be 600. The first 300 dimensions will be the average of the embeddings for “people” and “person”, and the second 300 dimensions will be the embedding for “parents”. The dimension of encoder hidden state, decoder hidden state and key embeddings are all 50. The embeddings for the functions and special tokens (e.g., “UNK”, “GO”) are randomly initialized by a truncated normal distribution with mean=0.0 and stddev=0.1. All the weight matrices are initialized with a uniform distribution in [− p 3 d , p 3 d ] where d is the input dimension. Dropout rate is set to 0.5, and we see a clear tendency for larger dropout rate to produce better performance, indicating overfitting is a major problem for learning. 3.3 Training Details In iterative ML training, the decoder uses a beam of size k = 100 to update the pseudo-gold programs and the model is trained for 20 epochs after each decoding step. We use the Adam optimizer (Kingma and Ba, 2014) with initial learning rate 0.001. 
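The iterative ML loop described above can be sketched as follows. This is a schematic illustration under the hyper-parameters reported in the text (beam size 100, 20 training epochs per decoding round, Adam with learning rate 0.001); `decode_beam`, `reward`, and `train_ml` are hypothetical stand-ins for the model's beam decoder, the F1-based reward, and the likelihood-training step, and the number of rounds is illustrative.

```python
# Schematic sketch of iterative ML training (not the authors' implementation).
# A "program" is assumed to be a list of tokens; dataset yields (question,
# gold_answer) pairs.

def iterative_ml(model, dataset, decode_beam, reward, train_ml,
                 n_rounds=8, beam_size=100, epochs=20):
    best_program, best_score = {}, {}
    for _ in range(n_rounds):
        # Search step: decode with a large beam and keep, for each question,
        # the highest-reward (and, on reward ties, shortest) program seen so
        # far across all rounds as the pseudo-gold program.
        for question, gold_answer in dataset:
            for program in decode_beam(model, question, beam_size):
                score = (reward(program, gold_answer), -len(program))
                if score[0] > 0.0 and score > best_score.get(
                        question, (0.0, float("-inf"))):
                    best_score[question] = score
                    best_program[question] = program
        # Training step: maximize the likelihood of the pseudo-gold programs;
        # questions with no positive-reward program are simply left out.
        train_ml(model, list(best_program.items()),
                 epochs=epochs, optimizer="adam", lr=1e-3)
    return model, best_program
```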
In our experiment, this process usually converges after a few (5-8) iterations. For REINFORCE training, the best hyperparameters are chosen using the validation set. We use a beam of size k = 5 for decoding, and ↵is set to 0.1. Because the dataset is small and some relations are only used once in the whole training set, we train the model on the entire training set for 200 iterations with the best hyperparameters. Then we train the model with learning rate decay until convergence. Learning rate is decayed as gt = g0 ⇥β max(0,t−ts) m , where g0 = 0.001, β = 0.5 m = 1000, and ts is the number of training steps at the end of iteration 200. Since decoding needs to query the knowledge base (KB) constantly, the speed bottleneck for training is decoding. We address this problem in our implementation by partitioning the dataset, and using multiple decoders in parallel to handle each partition. We use 100 decoders, which queries 50 KG servers, and one trainer. The neural network model is implemented in TensorFlow. Since the model is small, we didn’t see a significant speedup by using GPU, so all the decoders and the trainer are using CPU only. Inspired by the staged generation process in Yih et al. (2015), curriculum learning includes two steps. We first run iterative ML for 10 iterations with programs constrained to only use the “Hop” function and the maximum number of expressions is 2. Then, we run iterative ML again, but use both “Hop” and “Filter”. The maximum number of expressions is 3, and the relations used by “Hop” are restricted to those that appeared in abest 0:T (q) in the first step. 3.4 Results and discussion We evaluate performance using the offical evaluation script for WEBQUESTIONSSP. Because the answer to a question may contain multiple entities or values, precision, recall and F1 are computed based on the output of each individual question, and average F1 is reported as the main evaluation metric. Accuracy measures the proportion of questions that are answered exactly. A comparison to STAGG, the previous state-ofthe-art model (Yih et al., 2016, 2015), is shown in Table 2. Our model beats STAGG with weak supervision by a significant margin on all metrics, while relying on no feature engineering or handcrafted rules. When STAGG is trained with strong supervision it obtains an F1 of 71.7, and thus NSM closes half the gap between training with weak and full supervision. Model Prec. Rec. F1 Acc. STAGG 67.3 73.1 66.8 58.8 NSM 70.8 76.0 69.0 59.5 Table 2: Results on the test set. Average F1 is the main evaluation metric and NSM outperforms STAGG with no domainspecific knowledge or feature engineering. Four key ingredients lead to the final performance of NSM. The first one is the neural computer interface that provides code assistance by checking for syntax and semantic errors. We find 29 that semantic checks are very effective for opendomain KBs with a large number of properties. For our task, the average number of choices is reduced from 23K per step (all properties) to less than 100 (the average number of properties connected to an entity). The second ingredient is augmented REINFORCE training. Table 3 compares augmented REINFORCE, REINFORCE, and iterative ML on the validation set. REINFORCE gets stuck in local optimum and performs poorly. Iterative ML training is not directly optimizing the F1 measure, and achieves sub-optimal results. 
In contrast, augmented REINFORCE is able to bootstrap using pseudo-gold programs found by iterative ML and achieves the best performance on both the training and validation set. Settings Train F1 Valid F1 Iterative ML 68.6 60.1 REINFORCE 55.1 47.8 Augmented REINFORCE 83.0 67.2 Table 3: Average F1 on the validation set for augmented REINFORCE, REINFORCE, and iterative ML. The third ingredient is curriculum learning during iterative ML. We compare the performance of the best programs found with and without curriculum learning in Table 4. We find that the best programs found with curriculum learning are substantially better than those found without curriculum learning by a large margin on every metric. Settings Prec. Rec. F1 Acc. No curriculum 79.1 91.1 78.5 67.2 Curriculum 88.6 96.1 89.5 79.8 Table 4: Evaluation of the programs with the highest F1 score in the beam (abest 0:t ) with and without curriculum learning. The last important ingredient is reducing overfitting. Given the small size of the dataset, overfitting is a major problem for training neural network models. We show the contributions of different techniques for controlling overfitting in Table 5. Note that after all the techniques have been applied, the model is still overfitting with training F1@1=83.0% and validation F1@1=67.2%. Among the programs generated by the model, a significant portion (36.7%) uses more than one expression. From Table 6, we can see that the performance doesn’t decrease much as the composiSettings ∆F1@1 −Pretrained word embeddings −5.5 −Pretrained property embeddings −2.7 −Dropout on GRU input and output −2.4 −Dropout on softmax −1.1 −Anonymize entity tokens −2.0 Table 5: Contributions of different overfitting techniques on the validation set. #Expressions 0 1 2 3 Percentage 0.4% 62.9% 29.8% 6.9% F1 0.0 73.5 59.9 70.3 Table 6: Percentage and performance of model generated programs with different complexity (number of expressions). tional depth increases, indicating that the model is effective at capturing compositionality. We observe that programs with three expressions use a more limited set of properties, mainly focusing on answering a few types of questions such as “who plays meg in family guy”, “what college did jeff corwin go to” and “which countries does russia border”. In contrast, programs with two expressions use a more diverse set of properties, which could explain the lower performance compared to programs with three expressions. Error analysis Error analysis on the validation set shows two main sources of errors: 1. Search failure: Programs with high reward are not found during search for pseudo-gold programs, either because the beam size is not large enough, or because the set of functions implemented by the interpreter is insufficient. The 89.5% F1 score in Table 4 indicates that at least 10% of the questions are of this kind. 2. Ranking failure: Programs with high reward exist in the beam, but are not ranked at the top during decoding. Because the training error is low, this is largely due to overfitting or spurious programs. The 67.2% F1 score in Table 3 indicates that about 20% of the questions are of this kind. 4 Related work Among deep learning models for program induction, Reinforcement Learning Neural Turing Machines (RL-NTMs) (Zaremba and Sutskever, 2015) are the most similar to NSM, as a nondifferentiable machine is controlled by a sequence 30 model. Therefore, both models rely on REINFORCE for training. 
The main difference between the two is the abstraction level of the programming language. RL-NTM uses lower level operations such as memory address manipulation and byte reading/writing, while NSM uses a high level programming language over a large knowledge base that includes operations such as following properties from entities, or sorting based on a property, which is more suitable for representing semantics. Earlier works such as OOPS (Schmidhuber, 2004) has desirable characteristics, for example, the ability to define new functions. These remain to be future improvements for NSM. We formulate NSM training as an instance of reinforcement learning (Sutton and Barto, 1998) in order to directly optimize the task reward of the structured prediction problem (Norouzi et al., 2016; Li et al., 2016; Yu et al., 2017). Compared to imitation learning methods (Daume et al., 2009; Ross et al., 2011) that interpolate a model distribution with an oracle, NSM needs to solve a challenging search problem of training from weak supervisions in a large search space. Our solution employs two techniques (a) a symbolic “computer” helps find good programs by pruning the search space (b) an iterative ML training process, where beam search is used to find pseudogold programs. Wiseman and Rush (Wiseman and Rush, 2016) proposed a max-margin approach to train a sequence-to-sequence scorer. However, their training procedure is more involved, and we did not implement it in this work. MIXER (Ranzato et al., 2015) also proposed to combine ML training and REINFORCE, but they only considered tasks with full supervisions. Berant and Liang (Berant and Liang, 2015) applied imitation learning to semantic parsing, but still requires hand crafted grammars and features. NSM is similar to Neural Programmer (Neelakantan et al., 2015) and Dynamic Neural Module Network (Andreas et al., 2016) in that they all solve the problem of semantic parsing from structured data, and generate programs using similar semantics. The main difference between these approaches is how an intermediate result (the memory) is represented. Neural Programmer and Dynamic-NMN chose to represent results as vectors of weights (row selectors and attention vectors), which enables backpropagation and search through all possible programs in parallel. However, their strategy is not applicable to a large KB such as Freebase, which contains about 100M entities, and more than 20k properties. Instead, NSM chooses a more scalable approach, where the “computer” saves intermediate results, and the neural network only refers to them with variable names (e.g., “R1” for all cities in the US). NSM is similar to the Path Ranking Algorithm (PRA) (Lao et al., 2011) in that semantics is encoded as a sequence of actions, and denotations are used to prune the search space during learning. NSM is more powerful than PRA by 1) allowing more complex semantics to be composed through the use of a key-variable memory; 2) controlling the search procedure with a trained neural network, while PRA only samples actions uniformly; 3) allowing input questions to express complex relations, and then dynamically generating action sequences. PRA can combine multiple semantic representations to produce the final prediction, which remains to be future work for NSM. 5 Conclusion We propose the Manager-Programmer-Computer framework for neural program induction. 
It integrates neural networks with a symbolic nondifferentiable computer to support abstract, scalable and precise operations through a friendly neural computer interface. Within this framework, we introduce the Neural Symbolic Machine, which integrates a neural sequence-to-sequence “programmer” with key-variable memory, and a symbolic Lisp interpreter with code assistance. Because the interpreter is non-differentiable and to directly optimize the task reward, we apply REINFORCE and use pseudo-gold programs found by an iterative ML training process to bootstrap training. NSM achieves new state-of-the-art results on a challenging semantic parsing dataset with weak supervision, and significantly closes the gap between weak and full supervision. It is trained endto-end, and does not require any feature engineering or domain-specific knowledge. Acknowledgements We thank for discussions and help from Arvind Neelakantan, Mohammad Norouzi, Tom Kwiatkowski, Eugene Brevdo, Lukasz Kaizer, Thomas Strohmann, Yonghui Wu, Zhifeng Chen, Alexandre Lacoste, and John Blitzer. The second author is partially supported by the Israel Science Foundation, grant 942/16. 31 References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. CoRR abs/1601.01705. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. volume 2, page 6. Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. TACL 3:545–558. K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data (SIGMOD). pages 1247–1250. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724–1734. http://www.aclweb.org/anthology/D141179. H. Daume, J. Langford, and D. Marcu. 2009. Searchbased structured prediction. Machine Learning 75:297–325. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Association for Computational Linguistics (ACL). Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401 . Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwinska, Sergio G. Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri ˜A P. Badia, Karl M. Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016a. Hybrid computing using a neural network with dynamic external memory. Nature advance online publication. https://doi.org/10.1038/nature20101. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016b. Hybrid computing using a neural network with dynamic external memory. Nature . 
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 29(6):82–97. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL). J. Jiang, A. Teichert, J. Eisner, and H. Daume. 2012. Learned prioritization for trading off accuracy and speed. In Advances in Neural Information Processing Systems (NIPS). Łukasz Kaiser and Ilya Sutskever. 2015. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228 . Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Ni Lao, Tom Mitchell, and William W Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 529–539. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541 . P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2015. Neural programmer: Inducing latent programs with gradient descent. CoRR abs/1511.04834. Mohammad Norouzi, Samy Bengio, zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances in Neural Information Processing Systems (NIPS). Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In ACL. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. 32 Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732 . Scott Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In ICLR. S. Ross, G. Gordon, and A. Bagnell. 2011. A reduction of imitation learning and structured prediction to noregret online learning. In Artificial Intelligence and Statistics (AISTATS). Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2016. Prioritized experience replay. In International Conference on Learning Representations. Puerto Rico. J¨urgen Schmidhuber. 2004. Optimal ordered problem solver. Machine Learning 54(3):211–254. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484–489. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems (NIPS). Yushi Wang, Jonathan Berant, and Percy Liang. 2015. 
Building a semantic parser overnight. In Association for Computational Linguistics (ACL). Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916 . Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Machine Learning. pages 229– 256. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beamsearch optimization. CoRR abs/1606.02960. http://arxiv.org/abs/1606.02960. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Association for Computational Linguistics (ACL). Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Association for Computational Linguistics (ACL). Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965 . Adam Yu, Hongrae Lee, and Quoc Le. 2017. Learning to skim text. In ACL. Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521 . M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658– 666. 33
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 321–331 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1030 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 321–331 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1030 Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling Zhe Gan∗, Chunyuan Li∗†, Changyou Chen, Yunchen Pu, Qinliang Su, Lawrence Carin Department of Electrical and Computer Engineering, Duke University {zg27, cl319, cc448, yp42, qs15, lcarin}@duke.edu Abstract Recurrent neural networks (RNNs) have shown promising performance for language modeling. However, traditional training of RNNs using back-propagation through time often suffers from overfitting. One reason for this is that stochastic optimization (used for large training sets) does not provide good estimates of model uncertainty. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (also appropriate for large training sets) to learn weight uncertainty in RNNs. It yields a principled Bayesian learning algorithm, adding gradient noise during training (enhancing exploration of the model-parameter space) and model averaging when testing. Extensive experiments on various RNN models and across a broad range of applications demonstrate the superiority of the proposed approach relative to stochastic optimization. 1 Introduction Language modeling is a fundamental task, used for example to predict the next word or character in a text sequence given the context. Recently, recurrent neural networks (RNNs) have shown promising performance on this task (Mikolov et al., 2010; Sutskever et al., 2011). RNNs with Long Short-Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) have emerged as a popular architecture, due to their representational power and effectiveness at capturing long-term dependencies. RNNs are usually trained via back-propagation through time (Werbos, 1990), using stochastic op∗Equal contribution. †Corresponding author. timization methods such as stochastic gradient descent (SGD) (Robbins and Monro, 1951); stochastic methods of this type are particularly important for training with large data sets. However, this approach often provides a maximum a posteriori (MAP) estimate of model parameters. The MAP solution is a single point estimate, ignoring weight uncertainty (Blundell et al., 2015; Hern´andezLobato and Adams, 2015). Natural language often exhibits significant variability, and hence such a point estimate may make over-confident predictions on test data. To alleviate overfitting RNNs, good regularization is known as a key factor to successful applications. In the neural network literature, Bayesian learning has been proposed as a principled method to impose regularization and incorporate model uncertainty (MacKay, 1992; Neal, 1995), by imposing prior distributions on model parameters. Due to the intractability of posterior distributions in neural networks, Hamiltonian Monte Carlo (HMC) (Neal, 1995) has been used to provide sample-based approximations to the true posterior. Despite the elegant theoretical property of asymptotic convergence to the true posterior, HMC and other conventional Markov Chain Monte Carlo methods are not scalable to large training sets. 
This paper seeks to scale up Bayesian learning of RNNs to meet the challenge of the increasing amount of “big” sequential data in natural language processing, leveraging recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) algorithms (Welling and Teh, 2011; Chen et al., 2014; Ding et al., 2014; Li et al., 2016a,b). Specifically, instead of training a single network, SG-MCMC is employed to train an ensemble of networks, where each network has its parameters drawn from a shared posterior distribution. This is implemented by adding additional 321 Encoding weights Recurrent weights Decoding weights Output Input Hidden Figure 1: Illustration of different weight learning strategies in a single-hidden-layer RNN. Stochastic optimization used for MAP estimation puts fixed values on all weights. Naive dropout is allowed to put weight uncertainty only on encoding and decoding weights, and fixed values on recurrent weights. The proposed SG-MCMC scheme imposes distributions on all weights. gradient noise during training and utilizing model averaging when testing. This simple procedure has the following salutary properties for training neural networks: (i) When training, the injected noise encourages model-parameter trajectories to better explore the parameter space. This procedure was also empirically found effective in Neelakantan et al. (2016). (ii) Model averaging when testing alleviates overfitting and hence improves generalization, transferring uncertainty in the learned model parameters to subsequent prediction. (iii) In theory, both asymptotic and non-asymptotic consistency properties of SG-MCMC methods in posterior estimation have been recently established to guarantee convergence (Chen et al., 2015a; Teh et al., 2016). (iv) SG-MCMC is scalable; it shares the same level of computational cost as SGD in training, by only requiring the evaluation of gradients on a small mini-batch. To the authors’ knowledge, RNN training using SG-MCMC has not been investigated previously, and is a contribution of this paper. We also perform extensive experiments on several natural language processing tasks, demonstrating the effectiveness of SG-MCMC for RNNs, including character/word-level language modeling, image captioning and sentence classification. 2 Related Work Several scalable Bayesian learning methods have been proposed recently for neural networks. These come in two broad categories: stochastic variational inference (Graves, 2011; Blundell et al., 2015; Hern´andez-Lobato and Adams, 2015) and SG-MCMC methods (Korattikara et al., 2015; Li et al., 2016a). While prior work focuses on feed-forward neural networks, there has been little if any research reported for RNNs using SGMCMC. Dropout (Hinton et al., 2012; Srivastava et al., 2014) is a commonly used regularization method for training neural networks. Recently, several works have studied how to apply dropout to RNNs (Pachitariu and Sahani, 2013; Bayer et al., 2013; Pham et al., 2014; Zaremba et al., 2014; Bluche et al., 2015; Moon et al., 2015; Semeniuta et al., 2016; Gal and Ghahramani, 2016b). Among them, naive dropout (Zaremba et al., 2014) can impose weight uncertainty only on encoding weights (those that connect input to hidden units) and decoding weights (those that connect hidden units to output), but not the recurrent weights (those that connect consecutive hidden states). It has been concluded that noise added in the recurrent connections leads to model instabilities, hence disrupting the RNN’s ability to model sequences. 
Dropout has been recently shown to be a variational approximation technique in Bayesian learning (Gal and Ghahramani, 2016a; Kingma et al., 2015). Based on this, (Gal and Ghahramani, 2016b) proposed a new variant of dropout that can be successfully applied to recurrent layers, where the same dropout masks are shared along time for encoding, decoding and recurrent weights, respectively. Alternatively, we focus on SG-MCMC, which can be viewed as the Bayesian interpretation of dropout from the perspective of posterior sampling (Li et al., 2016c); this also allows imposition of model uncertainty on recurrent layers, enhancing performance. A comparison of naive dropout and SG-MCMC is illustrated in Fig. 1. 3 Recurrent Neural Networks 3.1 RNN as Bayesian Predictive Models Consider data D = {D1, · · · , DN}, where Dn ≜ (Xn, Yn), with input Xn and output Yn. Our goal is to learn model parameters θ to best characterize the relationship from Xn to Yn, with corresponding data likelihood p(D|θ) = QN n=1 p(Dn|θ). In Bayesian statistics, one sets a prior on θ via distribution p(θ). The posterior p(θ|D) ∝p(θ)p(D|θ) reflects the belief concerning the model parameter distribution after observing the data. Given a test input ˜X (with missing output ˜Y), the uncertainty learned in training 322 is transferred to prediction, yielding the posterior predictive distribution: p( ˜Y| ˜X, D)= Z θ p( ˜Y| ˜X, θ)p(θ|D)dθ . (1) When the input is a sequence, RNNs may be used to parameterize the input-output relationship. Specifically, consider input sequence X = {x1, . . . , xT }, where xt is the input data vector at time t. There is a corresponding hidden state vector ht at each time t, obtained by recursively applying the transition function ht = H(ht−1, xt) (specified in Section 3.2; see Fig. 1). The output Y differs depending on the application: a sequence {y1, . . . , yT } in language modeling or a discrete label in sentence classification. In RNNs the corresponding decoding function is p(y|h), described in Section 3.3. 3.2 RNN Architectures The transition function H(·) can be implemented with a gated activation function, such as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) or a Gated Recurrent Unit (GRU) (Cho et al., 2014). Both the LSTM and GRU have been proposed to address the issue of learning long-term sequential dependencies. Long Short-Term Memory The LSTM architecture addresses the problem of learning longterm dependencies by introducing a memory cell, that is able to preserve the state over long periods of time. Specifically, each LSTM unit has a cell containing a state ct at time t. This cell can be viewed as a memory unit. Reading or writing the cell is controlled through sigmoid gates: input gate it, forget gate ft, and output gate ot. The hidden units ht are updated as it = σ(Wixt + Uiht−1 + bi) , ft = σ(Wfxt + Ufht−1 + bf) , ot = σ(Woxt + Uoht−1 + bo) , ˜ct = tanh(Wcxt + Ucht−1 + bc) , ct = ft ⊙ct−1 + it ⊙˜ct , ht = ot ⊙tanh(ct) , where σ(·) denotes the logistic sigmoid function, and ⊙represents the element-wise matrix multiplication operator. W{i,f,o,c} are encoding weights, and U{i,f,o,c} are recurrent weights, as shown in Fig. 1. b{i,f,o,c} are bias terms. Variants Similar to the LSTM unit, the GRU also has gating units that modulate the flow of information inside the hidden unit. It has been shown that a GRU can achieve similar performance to an LSTM in sequence modeling (Chung et al., 2014). We specify the GRU in the Supplementary Material. 
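For concreteness, the LSTM update above can be written as a short NumPy sketch; the weight shapes and random initialization below are illustrative only.

```python
# Minimal NumPy sketch of one LSTM step. W_* are encoding weights, U_* are
# recurrent weights, b_* are biases; shapes and initialization are toy values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    W, U, b = params["W"], params["U"], params["b"]   # dicts keyed by gate name
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])        # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])        # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])        # output gate
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])  # candidate cell
    c = f * c_prev + i * c_tilde          # memory cell update
    h = o * np.tanh(c)                    # hidden state
    return h, c

# Toy usage with input dimension 4 and hidden dimension 3.
rng = np.random.default_rng(0)
params = {"W": {g: rng.normal(size=(3, 4)) for g in "ifoc"},
          "U": {g: rng.normal(size=(3, 3)) for g in "ifoc"},
          "b": {g: np.zeros(3) for g in "ifoc"}}
h, c = lstm_step(rng.normal(size=4), np.zeros(3), np.zeros(3), params)
```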
The LSTM can be extended to the bidirectional LSTM and multilayer LSTM. A bidirectional LSTM consists of two LSTMs that are run in parallel: one on the input sequence and the other on the reverse of the input sequence. At each time step, the hidden state of the bidirectional LSTM is the concatenation of the forward and backward hidden states. In multilayer LSTMs, the hidden state of an LSTM unit in layer ℓis used as input to the LSTM unit in layer ℓ+ 1 at the same time step (Graves, 2013). 3.3 Applications The proposed Bayesian framework can be applied to any RNN model; we focus on the following tasks to demonstrate the ideas. Language Modeling In word-level language modeling, the input to the network is a sequence of words, and the network is trained to predict the next word in the sequence with a softmax classifier. Specifically, for a length-T sequence, denote yt = xt+1 for t = 1, . . . , T −1. x1 and yT are always set to a special START and END token, respectively. At each time t, there is a decoding function p(yt|ht) = softmax(Vht) to compute the distribution over words, where V are the decoding weights (the number of rows of V corresponds to the number of words/characters). We also extend this basic language model to consider other applications: (i) a character-level language model can be specified in a similar manner by replacing words with characters (Karpathy et al., 2016). (ii) Image captioning can be considered as a conditional language modeling problem, in which we learn a generative language model of the caption conditioned on an image (Vinyals et al., 2015; Gan et al., 2017). Sentence Classification Sentence classification aims to assign a semantic category label y to a whole sentence X. This is usually implemented through applying the decoding function once at the end of sequence: p(y|hT ) = softmax(VhT ), where the final hidden state of a RNN hT is often considered as the summary of the sentence (here 323 the number of rows of V corresponds to the number of classes). 4 Scalable Learning with SG-MCMC 4.1 The Pitfall of Stochastic Optimization Typically there is no closed-form solution for the posterior p(θ|D), and traditional Markov Chain Monte Carlo (MCMC) methods (Neal, 1995) scale poorly for large N. To ease the computational burden, stochastic optimization is often employed to find the MAP solution. This is equivalent to minimizing an objective of regularized loss function U(θ) that corresponds to a (non-convex) model of interest: θMAP = arg min U(θ), U(θ) = −log p(θ|D). The expectation in (1) is approximated as: p( ˜Y| ˜X, D)= p( ˜Y| ˜X, θMAP) . (2) Though simple and effective, this procedure largely loses the benefit of the Bayesian approach, because the uncertainty on weights is ignored. To more accurately approximate (1), we employ stochastic gradient (SG) MCMC (Welling and Teh, 2011). 4.2 Large-scale Bayesian Learning The negative log-posterior is U(θ) ≜−log p(θ) − N X n=1 log p(Dn|θ). (3) In optimization, E = −PN n=1 log p(Dn|θ) is typically referred to as the loss function, and R ∝ −log p(θ) as a regularizer. For large N, stochastic approximations are often employed: ˜Ut(θ)≜−log p(θ) −N M M X m=1 log p(Dim|θ), (4) where Sm = {i1, · · · , iM} is a random subset of the set {1, 2, · · · , N}, with M ≪N. The gradient on this mini-batch is denoted as ˜ft = ∇˜Ut(θ), which is an unbiased estimate of the true gradient. 
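As a rough sketch of the minibatch approximation in (4), assuming for illustration a zero-mean Gaussian prior on the weights and a user-supplied per-example log-likelihood gradient:

```python
# Sketch of the minibatch gradient estimate of U(theta) in (4).
# `log_lik_grad(theta, example)` is a hypothetical callable returning the
# gradient of log p(D_n | theta) for one example.

def stochastic_grad_U(theta, data, log_lik_grad, batch_indices,
                      prior_variance=1.0):
    """Unbiased estimate of grad U(theta) from a random minibatch.

    U(theta) = -log p(theta) - sum_n log p(D_n | theta); the minibatch sum is
    rescaled by N / M so its expectation matches the full-data gradient.
    """
    N, M = len(data), len(batch_indices)
    grad_prior = theta / prior_variance      # -grad log N(theta; 0, v I)
    grad_lik = sum(log_lik_grad(theta, data[i]) for i in batch_indices)
    return grad_prior - (N / M) * grad_lik
```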
The evaluation of (4) is cheap even when N is large, allowing one to efficiently collect a sufficient number of samples in large-scale Bayesian learning, {θs}S s=1, where S is the number of samples (this will be specified later). These samples are used to construct a sample-based estimation to the expectation in (1): Table 1: SG-MCMC algorithms and their optimization counterparts. Algorithms in the same row share similar characteristics. Algorithms SG-MCMC Optimization Basic SGLD SGD Precondition pSGLD RMSprop/Adagrad Momentum SGHMC momentum SGD Thermostat SGNHT Santa p( ˜Y| ˜X, D)≈1 S S X s=1 p( ˜Y| ˜X, θs) . (5) The finite-time estimation errors of SG-MCMC methods are bounded (Chen et al., 2015a), which guarantees (5) is an unbiased estimate of (1) asymptotically under appropriate decreasing stepsizes. 4.3 SG-MCMC Algorithms SG-MCMC and stochastic optimization are parallel lines of work, designed for different purposes; their relationship has recently been revealed in the context of deep learning. The most basic SG-MCMC algorithm has been applied to Langevin dynamics, and is termed SGLD (Welling and Teh, 2011). To help convergence, a momentum term has been introduced in SGHMC (Chen et al., 2014), a “thermostat” has been devised in SGNHT (Ding et al., 2014; Gan et al., 2015) and preconditioners have been employed in pSGLD (Li et al., 2016a). These SG-MCMC algorithms often share similar characteristics with their counterpart approaches from the optimization literature such as the momentum SGD, Santa (Chen et al., 2016) and RMSprop/Adagrad (Tieleman and Hinton, 2012; Duchi et al., 2011). The interrelationships between SG-MCMC and optimizationbased approaches are summarized in Table 1. SGLD Stochastic Gradient Langevin Dynamics (SGLD) (Welling and Teh, 2011) draws posterior samples, with updates θt = θt−1 −ηt ˜ft−1 + p 2ηtξt , (6) where ηt is the learning rate, and ξt ∼N(0, Ip) is a standard Gaussian random vector. SGLD is the SG-MCMC analog to stochastic gradient descent (SGD), whose parameter updates are given by: θt = θt−1 −ηt ˜ft−1 . (7) 324 Algorithm 1: pSGLD Input: Default hyperparameter settings: ηt = 1×10−3, λ = 10−8, β1 = 0.99. Initialize: v0 ←0, θ1 ∼N(0, I) ; for t = 1, 2, . . . , T do % Estimate gradient from minibatch St ˜ft = ∇˜Ut(θ); % Preconditioning vt ←β1vt−1 + (1 −β1) ˜ft ⊙˜ft; G−1 t ←diag  1 ⊘ λ1 + v 1 2 t  ; % Parameter update ξt ∼N(0, ηtG−1 t ); θt+1 ←θt + ηt 2 G−1 t ˜ft+ ξt; end SGD is guaranteed to converge to a local minimum under mild conditions (Bottou, 2010). The additional Gaussian term in SGLD helps the learning trajectory to explore the parameter space to approximate posterior samples, instead of obtaining a local minimum. pSGLD Preconditioned SGLD (pSGLD) (Li et al., 2016a) was proposed recently to improve the mixing of SGLD. It utilizes magnitudes of recent gradients to construct a diagonal preconditioner to approximate the Fisher information matrix, and thus adjusts to the local geometry of parameter space by equalizing the gradients so that a constant stepsize is adequate for all dimensions. This is important for RNNs, whose parameter space often exhibits pathological curvature and saddle points (Pascanu et al., 2013), resulting in slow mixing. There are multiple choices of preconditioners; similar ideas in optimization include Adagrad (Duchi et al., 2011), Adam (Kingma and Ba, 2015) and RMSprop (Tieleman and Hinton, 2012). 
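For reference, the SGLD update in (6) and the RMSprop-preconditioned update of Algorithm 1 can be sketched as follows. This is our illustration, written with the descent convention of (6), i.e., `grad` is the minibatch gradient of the negative log-posterior, and using the default hyper-parameters listed in Algorithm 1.

```python
# Sketch of SGLD (eq. 6) and pSGLD (Algorithm 1) parameter updates.
# `grad` is the minibatch gradient of U(theta); `v` carries the running
# average of squared gradients used to build the diagonal preconditioner.
import numpy as np

def sgld_step(theta, grad, eta, rng):
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * eta)
    return theta - eta * grad + noise

def psgld_step(theta, grad, v, rng, eta=1e-3, lam=1e-8, beta1=0.99):
    v = beta1 * v + (1.0 - beta1) * grad * grad        # RMSprop statistics
    G_inv = 1.0 / (lam + np.sqrt(v))                   # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(eta * G_inv)
    theta = theta - 0.5 * eta * G_inv * grad + noise   # preconditioned drift + noise
    return theta, v
```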
An efficient version of pSGLD, adopting RMSprop as the preconditioner G, is summarized in Algorithm 1, where ⊘denotes elementwise matrix division. When the preconditioner is fixed as the identity matrix, the method reduces to SGLD. 4.4 Understanding SG-MCMC To further understand SG-MCMC, we show its close connection to dropout/dropConnect (Srivastava et al., 2014; Wan et al., 2013). These methods improve the generalization ability of deep models, by randomly adding binary/Gaussian noise to the local units or global weights. For neural networks with the nonlinear function q(·) and consecutive layers h1 and h2, dropout and dropConnect are denoted as: Dropout: h2 = ξ0 ⊙q(θh1), DropConnect: h2 = q((ξ0 ⊙θ)h1), where the injected noise ξ0 can be binary-valued with dropping rate p or its equivalent Gaussian form (Wang and Manning, 2013): Binary noise: ξ0 ∼Ber(p), Gaussian noise: ξ0 ∼N(1, p 1 −p). Note that ξ0 is defined as a vector for dropout, and a matrix for dropConnect. By combining dropConnect and Gaussian noise from the above, we have the update rule (Li et al., 2016c): θt+1 = ξ0 ⊙θt −η 2 ˜ft = θt −η 2 ˜ft + ξ′ 0 , (8) where ξ′ 0 ∼N  0, p (1−p)diag(θ2 t )  ; (8) shows that dropout/ dropConnect and SGLD in (6) share the same form of update rule, with the distinction being that the level of injected noise is different. In practice, the noise injected by SGLD may not be enough. A better way that we find to improve the performance is to jointly apply SGLD and dropout. This method can be interpreted as using SGLD to sample the posterior distribution of a mixture of RNNs, with mixture probability controlled by the dropout rate. 5 Experiments We present results on several tasks, including character/word-level language modeling, image captioning, and sentence classification. We do not perform any dataset-specific tuning other than early stopping on validation sets. When dropout is utilized, the dropout rate is set to 0.5. All experiments are implemented in Theano (Theano Development Team, 2016), using a NVIDIA GeForce GTX TITAN X GPU with 12GB memory. The hyper-parameters for the proposed algorithm include step size, minibatch size, thinning interval, number of burn-in epochs and variance of the Gaussian priors. We list the specific values used in our experiments in Table 2. The explanation of these hyperparameters, the initialization of model parameters and model specifications on each dataset are provided in the Supplementary Material. 325 Table 2: Hyper-parameter settings of pSGLD for different datasets. For PTB, SGLD is used. Datasets WP PTB Flickr8k Flickr30k MR CR SUBJ MPQA TREC Minibatch Size 100 32 64 64 50 50 50 50 50 Step Size 2×10−3 1 10−3 10−3 10−3 10−3 10−3 10−3 10−3 # Total Epoch 20 40 20 20 20 20 20 20 20 Burn-in (#Epoch) 4 4 3 3 1 1 1 1 1 Thinning Interval (#Epoch) 1/2 1/2 1 1/2 1 1 1 1 1 # Samples Collected 32 72 17 34 19 19 19 19 19 5.1 Language Modeling We first test character-level and word-level language modeling. The setup is as follows. • Following Karpathy et al. (2016), we test character-level language modeling on the War and Peace (WP) novel. The training/validation/test sets contain 260/32/33 batches, in which there are 100 characters. The vocabulary size is 87, and we consider a 2-hidden-layer RNN of dimension 128. • The Penn Treebank (PTB) corpus (Marcus et al., 1993) is used for word-level language modeling. The dataset adopts the standard split (929K training words, 73K validation words, and 82K test words) and has a vocabulary of size 10K. 
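As an illustration of how these hyper-parameters interact, the sketch below shows one plausible way the burn-in and thinning settings gate posterior-sample collection, and how the collected samples are averaged at test time as in (5); `train_one_epoch` and `predict_probs` are hypothetical stand-ins for a single SG-MCMC epoch and the network's predictive distribution under one weight sample, and thinning is counted in whole epochs for simplicity.

```python
# Schematic sketch of sample collection with burn-in and thinning, and of the
# sample-based prediction average in (5). Not the authors' implementation.
import copy
import numpy as np

def collect_samples(model, train_one_epoch, n_epochs, burn_in, thin):
    """Run SG-MCMC for n_epochs; snapshot the weights every `thin` epochs after burn-in."""
    samples = []
    for epoch in range(1, n_epochs + 1):
        train_one_epoch(model)
        if epoch > burn_in and (epoch - burn_in) % thin == 0:
            samples.append(copy.deepcopy(model))   # one posterior weight sample
    return samples

def predictive_average(samples, predict_probs, x):
    """Monte Carlo estimate of p(y | x, D) as in (5): average over weight samples."""
    return np.mean([predict_probs(m, x) for m in samples], axis=0)
```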
We train LSTMs of three sizes; these are denoted the small/medium/large LSTM. All LSTMs have two layers and are unrolled for 20 steps. The small, medium and large LSTM has 200, 650 and 1500 units per layer, respectively. We consider two types of training schemes on PTB corpus: (i) Successive minibatches: Following Zaremba et al. (2014), the final hidden states of the current minibatch are used as the initial hidden states of the subsequent minibatch (successive minibatches sequentially traverse the training set). (ii) Random minibatches: The initial hidden states of each minibatch are set to zero vectors, hence we can randomly sample minibatches in each update. We study the effects of different types of architecture (LSTM/GRU/Vanilla RNN (Karpathy et al., 2016)) on the WP dataset, and effects of different learning algorithms on the PTB dataset. The comparison of test cross-entropy loss on WP is shown in Table 3. We observe that pSGLD consistently outperforms RMSprop. Table 4 summarizes the test set performance on PTB1. It is clear 1The results reported here do not match Zaremba et al. (2014) due to the implementation details. However, we proTable 3: Test cross-entropy loss on WP dataset. Methods LSTM GRU RNN RMSprop 1.3607 1.2759 1.4239 pSGLD 1.3375 1.2561 1.4093 10 20 30 40 50 60 Individual Sample 110 120 130 140 150 160 170 180 Perplexity 0 10 20 30 40 50 60 Number of Samples for Model Averaging 110 120 130 140 150 160 170 180 Perplexity forward collection backward collection thinned collection (a) Single sample (b) Different collections Figure 2: Effects of collected samples. that our sampling-based method consistently outperforms the optimization counterpart, where the performance gain mainly comes from adding gradient noise and model averaging. When compared with dropout, SGLD performs better on the small LSTM model, but worse on the medium and large LSTM model. This may imply that dropout is suitable to regularizing large networks, while SGLD exhibits better regularization ability on small networks, partially due to the fact that dropout may inject a higher level of noise during training than SGLD. In order to inject a higher level of noise into SGLD, we empirically apply SGLD and dropout jointly, and found that this provided the best performace on the medium and large LSTM model. We study three strategies to do model averaging, i.e., forward collection, backward collection and thinned collection. Given samples (θ1, · · · , θK) and the number of samples S used for averaging, forward collection refers to using (θ1, · · · , θS) for the evaluation of a test function, backward collection refers to using (θK−S+1, · · · , θK), while thinned collection chooses samples from θ1 to θK with interval K/S. Fig. 2 plots the effects of these strategies, where Fig. 2(a) plots the perplexity of every single sample, Fig. 2(b) plots the perplexities using the three schemes. Only after 20 vide a fair comparison to all methods. 326 Table 4: Test perplexity on Penn Treebank. Methods Small Medium Large Random minibatches SGD 123.85 126.31 130.25 SGD+Dropout 136.39 100.12 97.65 SGLD 117.36 109.14 105.86 SGLD+Dropout 139.54 99.58 94.03 Successive minibatches SGD 113.45 123.14 127.68 SGD+Dropout 117.85 84.60 80.85 SGLD 108.61 121.16 131.40 SGLD+Dropout 125.44 82.71 78.91 Literature Moon et al. (2015) − 97.0 118.7 Moon et al. (2015)+ emb. dropout − 86.5 86.0 Zaremba et al. 
(2014) − 82.7 78.4 Gal and Ghahramani (2016b) − 78.6 73.4 samples is a converged perplexity achieved in the thinned collection, while it requires 30 samples for forward collection or 60 samples for backward collection. This is unsurprising, because thinned collection provides a better way to select samples. Nevertheless, averaging of samples provides significantly lower perplexity than using single samples. Note that the overfitting problem in Fig. 2(a) is also alleviated by model averaging. To better illustrate the benefit of model averaging, we visualize in Fig. 3 the probabilities of each word in a randomly chosen test sentence. The first 3 rows are the results predicted by 3 distinctive model samples, respectively; the bottom row is the result after averaging. Their corresponding perplexities for the test sentence are also shown on the right of each row. The 3 individual samples provide reasonable probabilities. For example, the consecutive words “New York”, “stock exchange” and “did not” are assigned with a higher probability. After averaging, we can see a much lower perplexity, as the samples can complement each other. For example, though the second sample can yield the lowest single-model perplexity, its prediction on word “York” is still benefited from the other two via averaging. 5.2 Image Caption Generation We next consider the problem of image caption generation, which is a conditional RNN model, where image features are extracted by residual network (He et al., 2016), and then fed into the RNN to generate the caption. We present results on two benchmark datasets, Flickr8k (Hodosh et al., 2013) and Flickr30k (Young et al., 2014). These 25.55 the 25.55 new 25.55 york 25.55 stock 25.55 exchange 25.55 did 25.55 not 25.55 fall 25.55 apart 22.24 the 22.24 new 22.24 york 22.24 stock 22.24 exchange 22.24 did 22.24 not 22.24 fall 22.24 apart 29.83 the 29.83 new 29.83 york 29.83 stock 29.83 exchange 29.83 did 29.83 not 29.83 fall 29.83 apart 21.98 the 21.98 new 21.98 york 21.98 stock 21.98 exchange 21.98 did 21.98 not 21.98 fall 21.98 apart 0 0.2 0.4 0.6 0.8 1 Figure 3: Predictive probabilities obtained by 3 samples and their average. Colors indicate normalized probability of each word. Best viewed in color. a"tan"dog"is"playing"in"the"grass a"tan"dog"is"playing"with"a"red"ball"in"the"grass a"tan"dog"with"a"red"collar"is"running"in"the"grass a"yellow"dog"runs"through"the"grass a"yellow"dog"is"running"through"the"grass a"brown"dog"is"running"through"the"grass a"group"of"people"stand"in"front"of"a"building a"group"of"people"stand"in"front"of"a"white"building a"group"of"people"stand"in"front"of"a"large"building a"man"and"a"woman"walking"on"a"sidewalk a"man"and"a"woman"stand"on"a"balcony a"man"and"a"woman"standing"on"the"ground Figure 4: Image captioning with different samples. Left are the given images, right are the corresponding captions. The captions in each box are from the same model sample. datasets contain 8,000 and 31,000 images, respectively. Each image is annotated with 5 sentences. A single-layer LSTM is employed with the number of hidden units set to 512. The widely used BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGEL (Lin, 2004), and CIDEr-D (Vedantam et al., 2015) metrics are used to evaluate the performance. All the metrics are computed by using the code released by the COCO evaluation server (Chen et al., 2015b). Table 5 presents results for pSGLD/RMSprop 327 Table 5: Performance on Flickr8k & Flickr30k: BLEU’s, METEOR, CIDEr, ROUGE-L and perplexity. 
Methods B-1 B-2 B-3 B-4 METEOR CIDEr ROUGE-L Perp. Results on Flickr8k RMSprop 0.640 0.427 0.288 0.197 0.205 0.476 0.500 16.64 RMSprop + Dropout 0.647 0.444 0.305 0.209 0.208 0.514 0.510 15.72 RMSprop + Gal’s Dropout 0.651 0.443 0.305 0.209 0.206 0.501 0.509 14.70 pSGLD 0.669 0.463 0.321 0.224 0.214 0.535 0.522 14.29 pSGLD + Dropout 0.656 0.450 0.309 0.211 0.209 0.512 0.512 14.26 Results on Flickr30k RMSprop 0.644 0.422 0.279 0.184 0.180 0.372 0.476 17.80 RMSprop + Dropout 0.656 0.435 0.295 0.200 0.185 0.396 0.481 18.05 RMSprop + Gal’s Dropout 0.636 0.429 0.290 0.197 0.190 0.408 0.480 17.27 pSGLD 0.657 0.438 0.300 0.206 0.192 0.421 0.490 15.61 pSGLD + Dropout 0.666 0.448 0.308 0.209 0.189 0.419 0.487 17.05 with or without dropout. In addition to (naive) dropout, we further compare pSGLD with the Gal’s dropout, recently proposed in Gal and Ghahramani (2016b), which is shown to be applicable to recurrent layers. Consistent with the results in the basic language modeling, pSGLD yields improved performance compared to RMSprop. For example, pSGLD provides 2.7 BLEU-4 score improvement over RMSprop on the Flickr8k dataset. By comparing pSGLD with RMSprop with dropout, we conclude that pSGLD exhibits better regularization ability than dropout on these two datasets. Apart from modeling weight uncertainty, different samples from our algorithm may capture different aspects of the input image. An example with two images is shown in Fig. 4, where 2 randomly chosen model samples are considered for each image. For each model sample, the top 3 generated captions are presented. We use the beam search approach (Vinyals et al., 2015) to generate captions, with a beam of size 5. In Fig. 4, the two samples for the first image mainly differ in the color and activity of the dog, e.g., “tan” or “yellow”, “playing” or “running”, whereas for the second image, the two samples reflect different understanding of the image content. 5.3 Sentence Classification We study the task of sentence classification on 5 datasets as in Kiros et al. (2015): MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005) and TREC (Li and Roth, 2002). A single-layer bidirectional LSTM is employed with the number of hidden units set to 400. Table 6 shows the test5 10 15 #Epoch 0.00 0.05 0.10 0.15 0.20 0.25 Error Train RMSprop RMSprop + Dropout pSGLD pSGLD + Dropout 5 10 15 #Epoch 0.10 0.12 0.14 0.16 0.18 0.20 0.22 0.24 0.26 Error Validation 5 10 15 #Epoch 0.10 0.15 0.20 Error Test Figure 5: Learning curves on TREC dataset. ing classification errors. 10-fold cross-validation is used for evaluation on the first 4 datasets, while TREC has a pre-defined training/test split, and we run each algorithm 10 times on TREC. The combination of pSGLD and dropout consistently provides the lowest errors. In the following, we focus on the analysis of TREC. Each sentence of TREC is a question, and the goal is to decide which topic type the question is most related to: location, human, numeric, abbreviation, entity or description. Fig. 5 plots the learning curves of different algorithms on the training, validation and testing sets of the TREC dataset. pSGLD and dropout have similar behavior: they explore the parameter space during learning, and thus coverge slower than RMSprop on the training dataset. However, the learned uncertainty alleviates overfitting and results in lower errors on the validation and testing datasets. To further study the Bayesian nature of the proposed approach, in Fig. 
6 we choose two testing sentences with high uncertainty (i.e., standard derivation in prediction) from the TREC dataset. Interestingly, after embedding to 2d-space with tSNE (Van der Maaten and Hinton, 2008), the two 328 Table 6: Sentence classification errors on five benchmark datasets. Methods MR CR SUBJ MPQA TREC RMSprop 21.86±1.19 20.20±1.35 8.13±1.19 10.60±1.28 8.14±0.63 RMSprop + Dropout 20.52±0.99 19.57±1.79 7.24±0.86 10.66±0.74 7.48±0.47 RMSprop + Gal’s Dropout 20.22±1.12 19.29±1.93 7.52±1.17 10.59±1.12 7.34±0.66 pSGLD 20.36±0.85 18.72±1.28 7.00±0.89 10.54±0.99 7.48±0.82 pSGLD + Dropout 19.33±1.10 18.18±1.32 6.61±1.06 10.22±0.89 6.88±0.65 Whatdoes ccin engines mean? Whatdoes adefibrillatordo? True5Type Predicted5 Type Description Description Testing5Question Entity Abbreviation Figure 6: Visualization. Top two rows show selected ambiguous sentences, which correspond to the points with black circles in tSNE visualization of the testing dataset. sentences correspond to points lying on the boundary of different classes. We use 20 model samples to estimate the prediction mean and standard derivation on the true type and predicted type. The classifier yields higher probability on the wrong types, associated with higher standard derivations. One can leverage the uncertainty information to make decisions: either manually make a human judgement when uncertainty is high, or automatically choose the one with lower standard derivations when both types exhibits similar prediction means. A more rigorous usage of the uncertainty information is left as future work. 5.4 Discussion Ablation Study We investigate the effectivenss of each module in the proposed algorithm in Table 7 on two datasets: TREC and PTB. The small network size is used on PTB. Let M1 denote only gradient noise, and M2 denote only model averaging. As can be seen, The last sample in pSGLD (M1) does not necessarily bring better results than RMSprop, but the model averaging over the samples of pSGLD indeed provide better results than model averaging of RMSprop (M2). This indicates that both gradient noise and model averaging are crucial for good performance in pSGLD. Table 7: Ablation study on TREC and PTB. Datasets RMSprop M1 M2 pSGLD TREC 8.14 8.34 7.54 7.48 PTB 120.45 122.14 114.86 109.44 Table 8: Running time on Flickr30k in seconds. Stages pSGLD RMSprop+Dropout Training 20324 12578 Testing 7047 1311 Running Time We report the training and testing time for image captioning on the Flickr30k dataset in Table 8. For pSGLD, the extra cost in training comes from adding gradient noise, and the extra cost in testing comes from model averaging. However, the cost in model averaging can be alleviated via the distillation methods: learning a single neural network that approximates the results of either a large model or an ensemble of models (Korattikara et al., 2015; Kim and Rush, 2016; Kuncoro et al., 2016). The idea can be incorporated with our SG-MCMC technique to achieve the same goal, which we leave for our future work. 6 Conclusion We propose a scalable Bayesian learning framework using SG-MCMC, to model weight uncertainty in recurrent neural networks. The learning framework is tested on several tasks, including language models, image caption generation and sentence classification. Our algorithm outperforms stochastic optimization algorithms, indicating the importance of learning weight uncertainty in recurrent neural networks. 
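To make the roles of the two modules in the ablation concrete (M1, gradient noise; M2, model averaging), the following numpy sketch shows a pSGLD-style update and test-time averaging over collected weight samples. It follows the preconditioned SGLD update of Li et al. (2016a) but drops the Γ(θ) correction term, as is common in practice; the function and parameter names are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def psgld_step(theta, grad_log_post, state, lr=1e-3, alpha=0.99, lam=1e-5, rng=None):
    """One pSGLD step. `grad_log_post(theta)` returns a stochastic estimate of the
    gradient of the log posterior (mini-batch log-likelihood rescaled to the full
    data size, plus the log prior); `state["v"]` is updated in place."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_log_post(theta)
    # RMSprop-style second-moment estimate -> diagonal preconditioner G
    state["v"] = alpha * state["v"] + (1.0 - alpha) * g * g
    G = 1.0 / (lam + np.sqrt(state["v"]))
    # Preconditioned gradient step plus Gaussian noise with covariance lr * G;
    # without the noise term this reduces to an RMSprop-like optimizer.
    noise = rng.normal(size=theta.shape) * np.sqrt(lr * G)
    return theta + 0.5 * lr * G * g + noise

def averaged_prediction(samples, predict_proba, x):
    """Model averaging over a thinned collection of weight samples: average the
    predictive distributions rather than the weights themselves."""
    return np.mean([predict_proba(theta, x) for theta in samples], axis=0)
```

In this sketch, `samples` would hold the thinned collection of weight vectors gathered during the later epochs; using only the last sample instead of averaging corresponds to the M1 row of Table 7, while keeping the averaging but dropping the noise term corresponds to M2.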
Our algorithm requires little additional computational overhead in training, and multiple times of forward-passing for model averaging in testing. Acknowledgments This research was supported by ARO, DARPA, DOE, NGA, ONR and NSF. We acknowledge Wenlin Wang for the code on language modeling experiment. 329 References S. Banerjee and A. Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL workshop. J. Bayer, C. Osendorfer, D. Korhammer, N. Chen, S. Urban, and P. van der Smagt. 2013. On fast dropout and its applicability to recurrent networks. arXiv:1311.0701 . T. Bluche, C. Kermorvant, and J. Louradour. 2015. Where to apply dropout in recurrent neural networks for handwriting recognition? In ICDAR. C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. 2015. Weight uncertainty in neural networks. In ICML. L Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In COMPSTAT. C. Chen, D. Carlson, Z. Gan, C. Li, and L. Carin. 2016. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In AISTATS. C. Chen, N. Ding, and L. Carin. 2015a. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In NIPS. T. Chen, E. B. Fox, and C. Guestrin. 2014. Stochastic gradient Hamiltonian Monte Carlo. In ICML. X. Chen, H. Fang, T. Lin, R. Vedantam, S. Gupta, P. Doll´ar, and C. L. Zitnick. 2015b. Microsoft coco captions: Data collection and evaluation server. arXiv:1504.00325 . K. Cho, B. Van Merri¨enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555 . N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. 2014. Bayesian sampling using stochastic gradient thermostats. In NIPS. J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR . Y. Gal and Z. Ghahramani. 2016a. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML. Y. Gal and Z. Ghahramani. 2016b. A theoretically grounded application of dropout in recurrent neural networks. In NIPS. Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin. 2015. Scalable deep poisson factor analysis for topic modeling. In ICML. Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. 2017. Semantic compositional networks for visual captioning. In CVPR. A. Graves. 2011. Practical variational inference for neural networks. In NIPS. A. Graves. 2013. Generating sequences with recurrent neural networks. arXiv:1308.0850 . K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep residual learning for image recognition. In CVPR. J. M. Hern´andez-Lobato and R. P. Adams. 2015. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML. G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 . S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. In Neural computation. M. Hodosh, P. Young, and J. Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR . M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. SIGKDD . A. Karpathy, J. 
Johnson, and L. Fei-Fei. 2016. Visualizing and understanding recurrent networks. In ICLR Workshop. Y. Kim and A. M. Rush. 2016. Sequence-level knowledge distillation. In EMNLP. D. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In ICLR. D. Kingma, T. Salimans, and M. Welling. 2015. Variational dropout and the local reparameterization trick. In NIPS. R. Kiros, Y. Zhu, R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. 2015. Skip-thought vectors. In NIPS. A. Korattikara, V. Rathod, K. Murphy, and M. Welling. 2015. Bayesian dark knowledge. In NIPS. A. Kuncoro, M. Ballesteros, L. Kong, C. Dyer, and N. A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one mst parser. In EMNLP. C. Li, C. Chen, D. Carlson, and L. Carin. 2016a. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In AAAI. C. Li, C. Chen, K. Fan, and L. Carin. 2016b. Highorder stochastic gradient thermostats for Bayesian learning of deep models. In AAAI. C. Li, A. Stevens, C. Chen, Y. Pu, Z. Gan, and L. Carin. 2016c. Learning weight uncertainty with stochastic gradient mcmc for shape classification. In CVPR. 330 X. Li and D. Roth. 2002. Learning question classifiers. ACL . C. Lin. 2004. Rouge: A package for automatic evaluation of summaries. In ACL workshop. D. J. C. MacKay. 1992. A practical Bayesian framework for backpropagation networks. In Neural computation. M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics . T. Mikolov, M. Karafi´at, L. Burget, J. Cernock`y, and S. Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. T. Moon, H. Choi, H. Lee, and I. Song. 2015. Rnndrop: A novel dropout for rnns in asr. ASRU . R. M. Neal. 1995. Bayesian learning for neural networks. PhD thesis, University of Toronto. A. Neelakantan, L. Vilnis, Q. Le, I. Sutskever, L. Kaiser, K. Kurach, and J. Martens. 2016. Adding gradient noise improves learning for very deep networks. In ICLR workshop. M. Pachitariu and M. Sahani. 2013. Regularization and nonlinearities for neural language models: when are they needed? arXiv:1301.5650 . B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. ACL . B. Pang and L. Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. ACL . K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. R. Pascanu, T. Mikolov, and Y. Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML. V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In ICFHR. H. Robbins and S. Monro. 1951. A stochastic approximation method. In The annals of mathematical statistics. S. Semeniuta, A. Severyn, and E. Barth. 2016. Recurrent dropout without memory loss. arXiv:1603.05118 . N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR . I. Sutskever, J. Martens, and G. E. Hinton. 2011. Generating text with recurrent neural networks. In ICML. Y. W. Teh, A. H. Thi´ery, and S. J. Vollmer. 2016. Consistency and fluctuations for stochastic gradient Langevin dynamics. JMLR . Theano Development Team. 2016. 
Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688 . T. Tieleman and G. Hinton. 2012. Lecture 6.5rmsprop: Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning . L. Van der Maaten and G. E. Hinton. 2008. Visualizing data using t-SNE. JMLR . R. Vedantam, C. L. Zitnick, and D. Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR. O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. 2015. Show and tell: A neural image caption generator. In CVPR. L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. 2013. Regularization of neural networks using DropConnect. In ICML. S. Wang and C. Manning. 2013. Fast Dropout training. In ICML. M. Welling and Y. W. Teh. 2011. Bayesian learning via stochastic gradient Langevin dynamics. In ICML. P. Werbos. 1990. Backpropagation through time: what it does and how to do it. In Proceedings of the IEEE. J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation . P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL . W. Zaremba, I. Sutskever, and O. Vinyals. 2014. Recurrent neural network regularization. arXiv:1409.2329 . 331
2017
30
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 332–344 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1031 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 332–344 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1031 Learning attention for historical text normalization by learning to pronounce Marcel Bollmann Department of Linguistics Ruhr-Universität Bochum Germany [email protected] Joachim Bingel Dept. of Computer Science University of Copenhagen Denmark [email protected] Anders Søgaard Dept. of Computer Science University of Copenhagen Denmark [email protected] Abstract Automated processing of historical texts often relies on pre-normalization to modern word forms. Training encoder-decoder architectures to solve such problems typically requires a lot of training data, which is not available for the named task. We address this problem by using several novel encoder-decoder architectures, including a multi-task learning (MTL) architecture using a grapheme-to-phoneme dictionary as auxiliary data, pushing the state-of-theart by an absolute 2% increase in performance. We analyze the induced models across 44 different texts from Early New High German. Interestingly, we observe that, as previously conjectured, multi-task learning can learn to focus attention during decoding, in ways remarkably similar to recently proposed attention mechanisms. This, we believe, is an important step toward understanding how MTL works. 1 Introduction There is a growing interest in automated processing of historical documents, as evidenced by the growing field of digital humanities and the increasing number of digitally available collections of historical documents. A common approach to deal with the high amount of variance often found in this type of data is to perform spelling normalization (Piotrowski, 2012), which is the mapping of historical spelling variants to standardized/modernized forms (e.g. vnd →und ‘and’). Training data for supervised learning of historical text normalization is typically scarce, making it a challenging task for neural architectures, which typically require large amounts of labeled data. Nevertheless, we explore framing the spelling normalization task as a character-based sequence-to-sequence transduction problem, and use encoder–decoder recurrent neural networks (RNNs) to induce our transduction models. This is similar to models that have been proposed for neural machine translation (e.g., Cho et al. (2014)), so essentially, our approach could also be considered a specific case of character-based neural machine translation. By basing our model on individual characters as input, we keep the vocabulary size small, which in turn reduces the model’s complexity and the amount of data required to train it effectively. Using an encoder–decoder architecture removes the need for an explicit character alignment between historical and modern wordforms. Furthermore, we explore using an auxiliary task for which data is more readily available, namely grapheme-tophoneme mapping (word pronunciation), to regularize the induction of the normalization models. We propose several architectures, including multi-task learning architectures taking advantage of the auxiliary data, and evaluate them across 44 small datasets from Early New High German. 
Contributions Our contributions are as follows: • We are, to the best of our knowledge, the first to propose and evaluate encoder-decoder architectures for historical text normalization. • We evaluate several such architectures across 44 datasets of Early New High German. • We show that such architectures benefit from bidirectional encoding, beam search, and attention. • We also show that MTL with pronunciation as an auxiliary task improves the performance of architectures without attention. 332 • We analyze the above architectures and show that the MTL architecture learns attention from the auxiliary task, making the attention mechanism largely redundant. • We make our implementation publicly available at https://bitbucket.org/ mbollmann/acl2017. In sum, we both push the state-of-the-art in historical text normalization and present an analysis that, we believe, brings us a step further in understanding the benefits of multi-task learning. 2 Datasets Normalization For the normalization task, we use a total of 44 texts from the Anselm corpus (Dipper and Schultz-Balluff, 2013) of Early New High German.1 The corpus is a collection of manuscripts and prints of the same core text, a religious treatise. Although the texts are semi-parallel and share some vocabulary, they were written in different time periods (between the 14th and 16th century) as well as different dialectal regions, and show quite diverse spelling characteristics. For example, the modern German word Frau ‘woman’ can be spelled as fraw/vraw (Me), frawe (N2), frauwe (St), fraüwe (B2), frow (Stu), vrowe (Ka), vorwe (Sa), or vrouwe (B), among others.2 All texts in the Anselm corpus are manually annotated with gold-standard normalizations following guidelines described in Krasselt et al. (2015). For our experiments, we excluded texts from the corpus that are shorter than 4,000 tokens, as well as a few for which annotations were not yet available at the time of writing (mostly Low German and Dutch versions). Nonetheless, the remaining 44 texts are still quite short for machine-learning standards, ranging from about 4,200 to 13,200 tokens, with an average length of 7,350 tokens. For all texts, we removed tokens that consisted solely of punctuation characters. We also lowercase all characters, since it helps keep the size of the vocabulary low, and uppercasing of words is usually not very consistent in historical texts. Tokenization was not an issue for pre-processing these texts, since modern token boundaries have already been marked by the transcribers. 1https://www.linguistics.rub.de/ anselm/ 2We refer to individual texts using the same internal IDs that are found in the Anselm corpus (cf. the website). Grapheme-to-phoneme mappings We use learning to pronounce as our auxiliary task. This task consists of learning mappings from sequences of graphemes to the corresponding sequences of phonemes. We use the German part of the CELEX lexical database (Baayen et al., 1995), particularly the database of phonetic transcriptions of German wordforms. The database contains a total of 365,530 wordforms with transcriptions in DISC format, which assigns one character to each distinct phonological segment (including affricates and diphthongs). For example, the word Jungfrau ‘virgin’ is represented as ’jUN-frB. 3 Model 3.1 Base model We propose several architectures that are extensions of a base neural network architecture, closely following the sequence-to-sequence model proposed by Sutskever et al. (2014). 
It consists of the following: • an embedding layer that maps one-hot input vectors to dense vectors; • an encoder RNN that transforms the input sequence to an intermediate vector of fixed dimensionality; • a decoder RNN whose hidden state is initialized with the intermediate vector, and which is fed the output prediction of one timestep as the input for the next one; and • a final dense layer with a softmax activation which takes the decoder’s output and generates a probability distribution over the output classes at each timestep. For the encoder/decoder RNNs, we use long short-term memory units (LSTM) (Hochreiter and Schmidhuber, 1997). LSTMs are designed to allow recurrent networks to better learn long-term dependencies, and have proven advantageous to standard RNNs on many tasks. We found no significant advantage from stacking multiple LSTM layers for our task, so we use the simplest competitive model with only a single LSTM unit for both encoder and decoder. By using this encoder–decoder model, we avoid the need to generate explicit alignments between the input and output sequences, which would bring up the question of how to deal with input/output 333 v r o w e (START) f r a u f r a u (END) Figure 1: Flow diagram of the base model; left side is the encoder, right side the decoder, the latter of which has an additional prediction layer on top. Multi-task learning variants use two separate prediction layers for main/auxiliary tasks, while sharing the rest of the model. Embedding layers for the inputs are not explicitly shown. pairs of different lengths. Another important property is that the model does not start to generate any output until it has seen the full input sequence, which in theory allows it to learn from any part of the input, without being restricted to fixed context windows. An example illustration of the unrolled network is shown in Fig. 1. 3.2 Training During training, the encoder inputs are the historical wordforms, while the decoder inputs correspond to the correct modern target wordforms. We then train each model by minimizing the crossentropy loss across all output characters; i.e., if y = (y1, ..., yn) is the correct output word (as a list of one-hot vectors of output characters) and ˆy = (ˆy1, ..., ˆyn) is the model’s output, we minimize the mean loss −Pn i=1 yi log ˆyi over all training samples. For the optimization, we use the Adam algorithm (Kingma and Ba, 2015) with a learning rate of 0.003. To reduce computational complexity, we also set a maximum word length of 14, and filter all training samples where either the input or output word is longer than 14 characters. This only affects 172 samples across the whole dataset, and is only done during training. In other words, we evaluate our models across all the test examples. 3.3 Decoding For prediction, our base model generates output character sequences in a greedy fashion, selecting the character with the highest probability at each timestep. This works fairly well, but the greedy approach can yield suboptimal global picks, in which each individual character is sensibly derived from the input, but the overall word is nonsensical. We therefore also experiment with beam search decoding, setting the beam size to 5. Finally, we also experiment with using a lexical filter during the decoding step. Here, before picking the next 5 most likely characters during beam search, we remove all characters that would lead to a string not covered by the lexicon. 
This is again intended to reduce the occurrence of nonsensical outputs. For the lexicon, we use all word forms from CELEX (cf. Sec. 2) plus the target word forms from the training set.3 3.4 Attention In our base architecture, we assume that we can decode from a single vector encoding of the input sequence. This is a strong assumption, especially with long input sequences. Attention mechanisms give us more flexibility. The idea is that instead of encoding the entire input sequence into a fixedlength vector, we allow the decoder to “attend” to different parts of the input character sequence at each time step of the output generation. Importantly, we let the model learn what to attend to based on the input sequence and what it has produced so far. Our implementation is identical to the decoder with soft attention described by Xu et al. (2015). If a = (a1, ..., an) is the encoder’s output and ht is the decoder’s hidden state at timestep t, we first calculate a context vector ˆzt as a weighted combination of the output vectors ai: ˆzt = n X i=1 αiai (1) 3We observe that due to this filtering, we cannot reach 2.25% of the targets in our test set, most of which are Latin word forms. 334 The weights αi are derived by feeding the encoder’s output and the decoder’s hidden state from the previous timestep into a multilayer perceptron, called the attention model (fatt): α = softmax(fatt(a, ht−1)) (2) We then modify the decoder by conditioning its internal states not only on the previous hidden state ht−1 and the previously predicted output character yt−1, but also on the context vector ˆzt: it = σ(Wi[ht−1, yt−1, ˆzt] + bi) ft = σ(Wf[ht−1, yt−1, ˆzt] + bf) ot = σ(Wo[ht−1, yt−1, ˆzt] + bo) gt = tanh(Wg[ht−1, yt−1, ˆzt] + bg) ct = ft ⊙ct−1 + it ⊙gt ht = ot ⊙tanh(ct) (3) In Eq. 3, we follow the traditional LSTM description consisting of input gate it, forget gate ft, output gate ot, cell state ct and hidden state ht, where W and b are trainable parameters. For all experiments including an attentional decoder, we use a bi-directional encoder, comprised of one LSTM layer that reads the input sequence normally and another LSTM layer that reads it backwards, and attend over the concatenated outputs of these two layers. While a precise alignment of input and output sequences is sometimes difficult, most of the time the sequences align in a sequential order, which can be exploited by an attentional component. 3.5 Multi-task learning Finally, we introduce a variant of the base architecture, with or without beam search, that does multi-task learning (Caruana, 1993). The multitask architecture only differs from the base architecture in having two classifier functions at the outer layer, one for each of our two tasks. Our auxiliary task is to predict a sequence of phonemes as the correct pronunciation of an input sequence of graphemes. This choice is motivated by the relationship between phonology and orthography, in particular the observation that spelling variation often stems from phonological variation. We train our multi-task learning architecture by alternating between the two tasks, sampling one instance of the auxiliary task for each training sample of the main task. We use the encoderdecoder to generate a corresponding output sequence, whether a modern word form or a pronunciation. Doing so, we suffer a loss with respect to the true output sequence and update the model parameters. 
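A minimal sketch of this alternating scheme is shown below. It assumes a `model` with a shared encoder-decoder and two task-specific output layers, and an `update(model, example, task)` routine that performs one gradient step; both names are hypothetical and serve only to illustrate the sampling strategy described above.

```python
import itertools
import random

def train_multitask(model, main_data, aux_data, epochs, update):
    """Alternate between the normalization (main) task and the grapheme-to-phoneme
    (auxiliary) task: for every main-task example, one auxiliary example is drawn
    and used for an additional update of the shared layers and its own output layer."""
    aux_stream = itertools.cycle(random.sample(aux_data, len(aux_data)))
    for _ in range(epochs):
        random.shuffle(main_data)
        for example in main_data:
            update(model, example, task="normalization")
            update(model, next(aux_stream), task="pronunciation")
```

For readability the sketch updates on single examples; the actual setup described below uses mini-batches of 50 samples per task, with epochs counted by the size of the main-task training set.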
The update for a sample from a specific task affects the parameters of corresponding classifier function, as well as all the parameters of the shared hidden layers. 3.6 Hyperparameters We used a single manuscript (B) for manually evaluating and setting the hyperparameters. This manuscript is left out of the averages reported below. We believe that using a single manuscript for development, and using the same hyperparameters across all manuscripts, is more realistic, as we often do not have enough data in historical text normalization to reliably tune hyperparameters. For the final evaluation, we set the size of the embedding and the recurrent LSTM layers to 128, applied a dropout of 0.3 to the input of each recurrent layer, and trained the model on mini-batches with 50 samples each for a total of 50 epochs (in the multi-task learning setup, mini-batches contain 50 samples of each task, and epochs are counted by the size of the training set for the main task only). All these parameters were set on the B manuscript alone. 3.7 Implementation We implemented all of the models in Keras (Chollet, 2015). Any parameters not explicitly described here were left at their default values in Keras v1.0.8. 4 Evaluation We split up each text into three parts, using 1,000 tokens each for a test set and a development set (that is not currently used), and the remainder of the text (between 2,000 and 11,000 tokens) for training. We then train and evaluate on each of the 43 texts (excluding the B text that was used for hyper-parameter tuning) individually. Baselines We compare our architectures to several competitive baselines. Our first baseline is an averaged perceptron model trained to predict output character n-grams for each input character, after using Levenshtein alignment with generated segment distances (Wieling et al., 2009, Sec. 3.3) to align input and output characters. Our second baseline uses the same alignment, but trains a 335 Avg. Accuracy Norma 77.89% Averaged perceptron 75.72% Bi-LSTM tagger 79.91% MTL bi-LSTM tagger 79.56% Base model GREEDY 78.91% BEAM 79.27% BEAM+FILTER 80.46% BEAM+FILTER+ATTENTION 82.72% MTL model GREEDY 80.64% BEAM 81.13% BEAM+FILTER 82.76% BEAM+FILTER+ATTENTION 82.02% Table 1: Average word accuracy across 43 texts from the Anselm dataset, evaluated on the first 1,000 tokens of each text. Evaluation on the base encoder-decoder model (Sec. 3.1) with greedy search, beam search (k = 5) and/or lexical filtering (Sec. 3.3), with attentional decoder (Sec. 3.4), and the multi-task learning (MTL) model using grapheme-to-phoneme mappings (Sec. 3.5). deep bi-LSTM sequential tagger, following Bollmann and Søgaard (2016). We evaluate this tagger using both standard and multi-task learning. Finally, we compare our model to the rule-based and Levenshtein-based algorithms provided by the Norma tool (Bollmann, 2012).4 4.1 Word accuracy We use word-level accuracy as our evaluation metric. While we also measure character-level metrics, minor differences on character level can cause large differences in downstream applications, so we believe that perfectly matching the output sequences is more useful. Average scores across all 43 texts are presented in Table 1 (see Appendix A for individual scores). We first see that almost all our encoder-decoder architectures perform significantly better than the four state-of-the-art baselines. All our architectures perform better than Norma and the averaged perceptron, and all the MTL architectures outperform Bollmann and Søgaard (2016). 
We also see that beam search, filtering, and attention lead to cumulative gains in the context of the single-task architecture – with the best architecture outperforming the state-of-the-art by almost 3% in absolute terms. For our multi-task architecture, we also observe gains when we add beam search and filtering, but 4https://github.com/comphist/norma importantly, adding attention does not help. In fact, attention hurts the performance of our multitask architecture quite significantly. Also note that the multi-task architecture without attention performs on-par with the single-task architecture with attention. We hypothesize that the reason for this pattern, which is not only observed in the average scores in Table 1, but also quite consistent across the individual results in Appendix A, is that our multi-task learning already learns how to focus attention. This is the hypothesis that we will try to validate in Sec. 5: That multi-task learning can induce strategies for focusing attention comparable to attention strategies for recurrent neural networks. Sample predictions A small selection of predictions from our models is shown in Table 2. They serve to illustrate the effects of the various settings; e.g., the base model with greedy search tends to produce more nonsense words (ters, ünsget) than the others. Using a lexical filter helps the most in this regard: the base model with filtering correctly normalizes ergieng to erging ‘(he) fared’, while decoding without a filter produces the non-word erbiggen. Even for herczenlichen (modern herzlichen ‘heartfelt’), where no model finds the correct target form, only the model with filtering produces a somewhat reasonable alternative (herzgeliebtes ‘heartily loved’). In some cases (such as gewarnet ‘warned’), 336 Input Target Base model MTL model GREEDY BEAM B+F B+F+A B+F ergieng erging erbiggen erbiggen erging erging erging herczenlichen herzlichen herrgelichen herzgelichen herzgeliebtes herzel herzel tewr teuer ters terter terme teurer der iüngst jüngst ünsget pingst fingst fingst jüngst gewarnet gewarnt prandet prandert pranget gewarnt gewarnt dick oft oft oft oft dicke dicke Table 2: Selected predictions from some of our models on the M4 text; B = BEAM, F = FILTER, A = ATTENTION. only the models with attention or multi-task learning produce the correct normalization, but even when they are wrong, they often agree on the prediction (e.g. dicke, herzel). We will investigate this property further in Sec. 5. 4.2 Learned vector representations To gain further insights into our model, we created t-SNE projections (Maaten and Hinton, 2008) of vector representations learned on the M4 text. Fig. 2 shows the learned character embeddings. In the representations from the base model (Fig. 2a), characters that are often normalized to the same target character are indeed grouped closely together: e.g., historical <v> and <u> (and, to a smaller extent, <f>) are often used interchangeably in the M4 text. Note the wide separation of <n> and <m>, which is a feature of M4 that does not hold true for all of the texts, as these do not always display a clear distinction between nasals. On the other hand, the MTL model shows a better generalization of the training data (Fig. 2b): here, <u> is grouped closer to other vowel characters and far away from <v>/<f>. Also, <n> and <m> are now in close proximity. We can also visualize the internal word representations that are produced by the encoder (Fig. 3). 
Here, we chose words that demonstrate the interchangeable use of <u> and <v>. Historical vnd, vns, vmb become modern und, uns, um, changing the <v> to <u>. However, the representation of vmb learned by the base model is closer to forms like von, vor, uor, all starting with <v> in the target normalization. In the MTL model, however, these examples are indeed clustered together. 5 Analysis: Multi-task learning helps focus attention Table 1 shows that models which employ either an attention mechanism or multi-task learning obtain similar improvements in word accuracy. However, we observe a decline in word accuracy for models that combine multi-task learning with attention. A possible interpretation of this counterintuitive pattern might be that attention and MTL, to some degree, learn similar functions of the input data, a conjecture by Caruana (1998). We put this hypothesis to the test by closely investigating properties of the individual models below. 5.1 Model parameters First, we are interested in the weight parameters of the final layer that transforms the decoder output to class probabilities. We consider these parameters for our standard encoder-decoder model and compare them to the weights that are learned by the attention and multi-task models, respectively.5 Note that hidden layer parameters are not necessarily comparable across models, but with a fixed seed, differences in parameters over a reference model may be (and are, in our case). With a fixed seed, and iterating over data points in the same order, it is conceivable the two non-baselines end up in roughly the same alternative local optimum (or at least take comparable routes). We observe that the weight differences between the standard and the attention model correlate with the differences between the standard and multitask model by a Pearson’s r of 0.346, averaged across datasets, with a standard deviation of 0.315; on individual datasets, correlation coefficient is as 5For the multi-task models, this analysis disregards those dimensions that do not correspond to classes in the main task. 337 (a) Base model (b) Multi-task learning model Figure 2: t-SNE projections (with perplexity 7) of character embeddings from models trained on M4 (a) Base model (b) Multi-task learning model Figure 3: t-SNE projections (with perplexity 5) of the intermediate vectors produced by the encoder (“historical word embeddings”), from models trained on M4 Figure 4: Heat map of parameter differences in the final dense layer between (a) the plain and the attention model as well as (b) the plain and the multi-task model, when trained on the N4 manuscript. The changes correlate by ρ = 0.959. 338 Figure 5: First-derivative saliency w.r.t. the input sequence, as calculated from the base model (left), the attentional model (center), and the MTL model (right). The scores for the attentional and the multi-task model correlate by ρ = 0.615, while the correlation of either one with the base model is |ρ| < 0.12. high as 96. Figure 4 illustrates these highly parallel weight changes for the different models when trained on the N4 dataset. 5.2 Final output Next, we compare the effect that employing either an attention mechanism or multi-task learning has on the actual output of our system. We find that out of the 210.9 word errors that the base model produces on average across all test sets (comprising 1,000 tokens each), attention resolves 47.7, while multi-task learning resolves an average of 45.4 errors. 
Crucially, the overlap of errors that are resolved by both the attention and the MTL model amounts to 27.7 on average. Attention and multi-task also introduce new errors compared to the base model (26.6 and 29.5 per test set, respectively), and again we can observe a relatively high agreement of the models (11.8 word errors are introduced by both models). Finally, the attention and multi-task models display a word-level agreement of κ=0.834 (Cohen’s kappa), while either of these models is less strongly correlated with the base model (κ=0.817 for attention and κ=0.814 for multi-task learning). 5.3 Saliency analysis Our last analysis regards the saliency of the input timesteps with respect to the predictions of our models. We follow Li et al. (2016) in calculating first-derivative saliency for given input/output pairs and compare the scores from the different models. The higher the saliency of an input timestep, the more important it is in determining the model’s prediction at a given output timestep. Therefore, if two models produce similar saliency matrices for a given input/output pair, they have learned to focus on similar parts of the input during the prediction. Our hypothesis is that the attentional and the multi-task learning model should be more similar in terms of saliency scores than either of them compared to the base model. Figure 5 shows a plot of the saliency matrices generated from the word pair czeychen – zeichen ‘sign’. Here, the scores for the attentional and the MTL model indeed correlate by ρ = 0.615, while those for the base model do not correlate with either of them. A systematic analysis across 19,000 word pairs (where all models agree on the output) shows that this effect only holds for longer input sequences (≥7 characters), with a mean ρ = 0.303 (±0.177) for attentional vs. MTL model, while the base model correlates with either of them by ρ < 0.21. 6 Related Work Many traditional approaches to spelling normalization of historical texts use edit distances or some form of character-level rewrite rules, handcrafted (Baron and Rayson, 2008) or learned automatically (Bollmann, 2013; Porta et al., 2013). A more recent approach is based on characterbased statistical machine translation applied to historical text (Pettersson et al., 2013; SánchezMartínez et al., 2013; Scherrer and Erjavec, 2013; Ljubeši´c et al., 2016) or dialectal data (Scherrer and Ljubeši´c, 2016). This is conceptually very similar to our approach, except that we substitute the classical SMT algorithms for neural networks. Indeed, our models can be seen as a form of character-based neural MT (Cho et al., 2014). Neural networks have rarely been applied to 339 historical spelling normalization so far. Azawi et al. (2013) normalize old Bible text using bidirectional LSTMs with a layer that performs alignment between input and output wordforms. Bollmann and Søgaard (2016) also use bi-LSTMs to frame spelling normalization as a characterbased sequence labelling task, performing character alignment as a preprocessing step. Multi-task learning was shown to be effective for a variety of NLP tasks, such as POS tagging, chunking, named entity recognition (Collobert et al., 2011) or sentence compression (Klerke et al., 2016). It has also been used in encoderdecoder architectures, typically for machine translation (Dong et al., 2015; Luong et al., 2016), though so far not with attentional decoders. 
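To connect back to the saliency analysis of Section 5.3, first-derivative saliency can be computed with a few lines of automatic differentiation. The sketch below uses PyTorch purely for illustration (the models in this paper were implemented in Keras); `score_fn` is a hypothetical callable mapping the input character embeddings to the scalar output score under inspection.

```python
import torch

def first_derivative_saliency(embed_seq, score_fn):
    """Saliency of each input timestep with respect to a scalar output, e.g. the
    log-probability of one predicted output character (Li et al., 2016).
    `embed_seq` is a (timesteps, dim) tensor of input character embeddings."""
    embed_seq = embed_seq.clone().detach().requires_grad_(True)
    score = score_fn(embed_seq)   # scalar produced by the trained model
    score.backward()              # gradient of the score w.r.t. the embeddings
    # one saliency value per input character: magnitude of the gradient vector
    return embed_seq.grad.norm(dim=-1)
```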
7 Conclusion and Future Work We presented an approach to historical spelling normalization using neural networks with an encoder-decoder architecture, and showed that it consistently outperforms several existing baselines. Encouragingly, our work proves to be fully competitive with the sequence-labeling approach by Bollmann and Søgaard (2016), without requiring a prior character alignment. Specifically, we demonstrated the aptitude of multi-task learning to mitigate the shortage of training data for the named task. We included a multifaceted analysis of the effects that MTL introduces to our models and the resemblance that it bears to attention mechanisms. We believe that this analysis is a valuable contribution to the understanding of MTL approaches also beyond spelling normalization, and we are confident that our observations will stimulate further research into the relationship between MTL and attention. Finally, many improvements to the presented approach are conceivable, most notably introducing some form of token context to the model. Currently, we only consider word forms in isolation, which is problematic for ambiguous cases (such as jn, which can normalize to in ‘in’ or ihn ‘him’) and conceivably makes the task harder for others. Reranking the predictions with a language model could be one possible way to improve on this. Ljubeši´c et al. (2016), for example, experiment with segment-based normalization, using a character-based SMT model with character input derived from segments (essentially, token ngrams) instead of single tokens, which also introduces context. Such an approach could also deal with the issue of tokenization differences between the historical and the modern text, which is another challenge often found in datasets of historical text. Acknowledgments Marcel Bollmann was supported by Deutsche Forschungsgemeinschaft (DFG), Grant DI 1558/4. This research is further supported by ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden. References Mayce Al Azawi, Muhammad Zeshan Afzal, and Thomas M. Breuel. 2013. Normalizing historical orthography for OCR historical documents using LSTM. In Proceedings of the 2nd International Workshop on Historical Document Imaging and Processing. ACM, pages 80–85. https://doi.org/10.1145/2501115.2501131. R. Harald Baayen, Richard Piepenbrock, and Léon Gulikers. 1995. The CELEX lexical database (Release 2) (CD-ROM). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, PA. https://catalog.ldc.upenn.edu/ldc96l14. Alistair Baron and Paul Rayson. 2008. VARD 2: A tool for dealing with spelling variation in historical corpora. In Proceedings of the Postgraduate Conference in Corpus Linguistics. http://eprints.lancs.ac.uk/41666/. Marcel Bollmann. 2012. (Semi-)automatic normalization of historical texts using distance measures and the Norma tool. In Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2). Lisbon, Portugal. https://www.linguistics.ruhr-unibochum.de/comphist/pub/acrh12.pdf. Marcel Bollmann. 2013. Automatic normalization for linguistic annotation of historical language data. Bochumer Linguistische Arbeitsberichte 13. http://nbnresolving.de/urn/resolver.pl?urn:nbn:de:hebis:30:3310764. Marcel Bollmann and Anders Søgaard. 2016. Improving historical spelling normalization with bidirectional lstms and multi-task learning. In Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016). Osaka, Japan. http://aclweb.org/anthology/C16-1013. 
Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the 10th International Conference on Machine Learning (ICML). pages 41–48. 340 Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95–133. http://dl.acm.org/citation.cfm?id=296635.296645. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8). Doha, Qatar, pages 103– 111. http://dx.doi.org/10.3115/v1/W14-4012. François Chollet. 2015. Keras. https://github. com/fchollet/keras. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493–2537. http://dl.acm.org/citation.cfm?id=1953048.2078186. Stefanie Dipper and Simone Schultz-Balluff. 2013. The Anselm corpus: Methods and perspectives of a parallel aligned corpus. In Proceedings of the NODALIDA Workshop on Computational Historical Linguistics. http://www.ep.liu.se/ecp/087/003/ecp1387003.pdf. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1723–1732. https://doi.org/10.3115/v1/P15-1166. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR) ArXiv:1412.6980. http://arxiv.org/abs/1412.6980. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of NAACLHLT 2016. San Diego, CA, pages 1528–1533. http://dx.doi.org/10.18653/v1/N16-1179. Julia Krasselt, Marcel Bollmann, Stefanie Dipper, and Florian Petran. 2015. Guidelines for normalizing historical German texts. Bochumer Linguistische Arbeitsberichte 15. http://nbnresolving.de/urn/resolver.pl?urn:nbn:de:hebis:30:3419680. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 681–691. https://doi.org/10.18653/v1/N16-1082. Nikola Ljubeši´c, Katja Zupan, Darja Fišer, and Tomaž Erjavec. 2016. Normalising Slovene data: historical texts vs. user-generated content. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS). Bochum, Germany, pages 146–155. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. 4th International Conference on Learning Representations (ICLR 2016) https://arxiv.org/abs/1511.06114v4. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9:2579–2605. http://www.jmlr.org/papers/v9/vandermaaten08a.html. Eva Pettersson, Beáta Megyesi, and Jörg Tiedemann. 2013. 
An SMT approach to automatic annotation of historical text. In Proceedings of the NODALIDA Workshop on Computational Historical Linguistics. Oslo, Norway. http://www.ep.liu.se/ecp/087/005/ecp1387005.pdf. Michael Piotrowski. 2012. Natural Language Processing for Historical Texts. Number 17 in Synthesis Lectures on Human Language Technologies. Morgan & Claypool, San Rafael, CA. http://dx.doi.org/10.2200/s00436ed1v01y201207hlt017. Jordi Porta, José-Luis Sancho, and Javier Gómez. 2013. Edit transducers for spelling variation in Old Spanish. In Proceedings of the NODALIDA Workshop on Computational Historical Linguistics. Oslo, Norway. http://www.ep.liu.se/ecp/087/006/ecp1387006.pdf. Yves Scherrer and Tomaž Erjavec. 2013. Modernizing historical Slovene words with character-based SMT. In Proceedings of the 4th Biennial Workshop on Balto-Slavic Natural Language Processing. Sofia, Bulgaria. https://hal.inria.fr/hal-00838575. Yves Scherrer and Nikola Ljubeši´c. 2016. Automatic normalisation of the Swiss German ArchiMob corpus using character-level machine translation. In Proceedings of the 13th Conference on Natural Language Processing (KONVENS). Bochum, Germany, pages 248–255. http://archiveouverte.unige.ch/unige:90846. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS 2014). 27, pages 3104–3112. Felipe Sánchez-Martínez, Isabel Martínez-Sempere, Xavier Ivars-Ribes, and Rafael C. Carrasco. 2013. An open diachronic corpus of historical Spanish: 341 annotation criteria and automatic modernisation of spelling. http://arxiv.org/abs/1306.3692v1. Martijn Wieling, Jelena Proki´c, and John Nerbonne. 2009. Evaluating the pairwise string alignment of pronunciations. In Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education (LaTeCH – SHELT&R 2009). Athens, Greece, pages 26–34. http://dl.acm.org/citation.cfm?id=1642049.1642053. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In JMLR Workshop and Conference Proceedings: Proceedings of the 32nd International Conference on Machine Learning. Lille, France, volume 37, pages 2048–2057. http://proceedings.mlr.press/v37/xuc15.pdf. A Supplementary Material For interested parties, we provide our full evaluation results for each single text in our dataset. Table 3 shows token counts, a rough classification of each text’s dialectal region, and the results for the baseline methods. Table 4 presents the full results for our encoder-decoder models. 342 ID Region Tokens Norma Avg. Perc. 
Bi-LSTM Tagger BASE MTL B East Central 4,718 79.60% 76.30% 79.20% 78.82% D3 East Central 5,704 79.70% 77.20% 80.10% 81.62% H East Central 8,427 83.00% 78.60% 85.00% 84.32% B2 West Central 9,145 76.20% 74.60% 82.00% 80.12% KÄ1492 West Central 7,332 78.40% 74.80% 81.60% 80.82% KJ1499 West Central 7,330 77.00% 73.50% 84.50% 80.22% N1500 West Central 7,272 77.60% 72.70% 79.00% 78.52% N1509 West Central 7,418 78.40% 74.30% 80.80% 80.02% N1514 West Central 7,412 78.50% 72.20% 79.00% 79.62% St West Central 7,407 73.30% 70.30% 75.50% 73.03% D4 Upper/Central 5,806 76.10% 72.40% 76.50% 76.62% N4 Upper 8,593 79.30% 80.00% 81.80% 82.52% s1496/97 Upper 5,840 81.20% 77.70% 83.00% 82.62% B3 East Upper 6,222 82.30% 79.50% 81.50% 83.02% Hk East Upper 8,690 79.10% 78.20% 80.90% 79.52% M East Upper 8,700 75.20% 72.80% 83.90% 82.72% M2 East Upper 8,729 76.30% 75.10% 76.70% 79.32% M3 East Upper 7,929 79.20% 77.30% 80.40% 81.52% M5 East Upper 4,705 81.60% 76.40% 77.70% 76.92% M6 East Upper 4,632 74.90% 73.70% 75.20% 75.72% M9 East Upper 4,739 81.00% 79.00% 80.40% 79.32% M10 East Upper 4,379 77.20% 76.00% 75.10% 75.92% Me East Upper 4,560 80.20% 76.90% 80.30% 79.12% Sb East Upper 7,218 79.60% 75.70% 80.00% 80.12% T East Upper 8,678 76.00% 73.40% 75.80% 73.43% W East Upper 8,217 77.60% 78.20% 81.40% 80.72% We East Upper 6,661 82.70% 78.60% 81.50% 82.22% Ba North Upper 5,934 79.10% 80.20% 80.70% 80.02% Ba2 North Upper 5,953 80.70% 78.10% 82.50% 82.12% M4 North Upper 8,574 76.70% 75.70% 79.40% 79.32% M7 North Upper 4,638 78.60% 75.60% 78.20% 77.42% M8 North Upper 8,275 79.30% 78.20% 81.10% 80.02% n North Upper 9,191 79.80% 81.90% 84.40% 84.62% N North Upper 13,285 74.00% 71.70% 79.00% 79.42% N2 North Upper 7,058 82.80% 80.30% 84.30% 81.72% N3 North Upper 4,192 78.10% 76.40% 77.60% 77.12% Be West Upper 8,203 74.90% 75.30% 78.80% 77.52% Ka West Upper 12,641 72.80% 75.40% 80.10% 81.62% SG West Upper 7,838 79.70% 78.00% 81.70% 81.12% Sa West Upper 8,668 71.50% 71.90% 76.10% 74.93% Sa2 West Upper 8,834 77.60% 73.50% 79.50% 79.72% St2 West Upper 8,686 72.80% 73.20% 78.20% 79.92% Stu West Upper 8,011 78.00% 76.50% 79.40% 79.62% Le Dutch 7,087 71.30% 65.00% 75.60% 75.12% Average (-B) 7,353 77.89% 76.30% 79.91% 79.56% Table 3: Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using the baseline models (cf. Sec. 4): the Norma tool (Bollmann, 2012), an averaged perceptron model, and a deep biLSTM sequential tagger (Bollmann and Søgaard, 2016). 
343 ID Base model Multi-task learning model G B B+F B+F+A G B B+F B+F+A B 76.90% 77.30% 78.40% 82.70% 77.70% 79.50% 81.70% 80.10% D3 81.50% 81.60% 82.70% 83.20% 81.10% 81.70% 82.90% 83.20% H 82.60% 82.90% 84.50% 87.40% 85.00% 85.80% 86.60% 85.20% B2 81.00% 81.20% 82.40% 83.40% 80.00% 80.40% 82.70% 83.00% KÄ1492 83.00% 83.40% 83.60% 84.00% 83.40% 83.70% 85.10% 84.90% KJ1499 81.30% 81.30% 82.00% 84.60% 84.00% 84.00% 83.80% 82.50% N1500 79.50% 80.30% 81.30% 84.00% 82.20% 82.50% 83.60% 82.30% N1509 82.10% 82.40% 83.10% 85.00% 82.80% 83.50% 84.50% 82.80% N1514 80.40% 80.50% 81.10% 83.40% 82.30% 82.80% 84.20% 83.10% St 74.60% 74.60% 76.40% 79.70% 77.60% 77.80% 80.20% 77.70% D4 77.90% 77.20% 79.00% 81.40% 77.00% 77.90% 81.50% 79.90% N4 82.10% 82.30% 82.90% 84.80% 83.10% 83.00% 84.40% 84.00% s1496/97 80.40% 80.10% 81.10% 82.10% 82.30% 82.50% 85.20% 83.90% B3 80.80% 81.20% 82.20% 85.20% 82.70% 83.30% 84.80% 84.50% Hk 77.30% 79.00% 79.40% 82.90% 80.30% 80.40% 81.20% 83.70% M 81.40% 81.50% 82.60% 85.00% 82.90% 82.90% 82.70% 84.00% M2 79.90% 80.50% 81.30% 81.80% 78.80% 77.80% 79.60% 83.20% M3 81.00% 81.10% 82.00% 83.70% 82.80% 82.50% 83.50% 81.70% M5 76.60% 77.10% 79.00% 82.00% 78.20% 78.20% 80.90% 81.50% M6 72.70% 73.80% 75.20% 80.20% 77.30% 79.00% 80.30% 76.60% M9 78.20% 78.50% 79.70% 83.20% 80.70% 79.70% 83.20% 79.60% M10 72.00% 72.40% 73.20% 77.40% 75.70% 76.30% 77.90% 77.80% Me 76.90% 76.50% 78.50% 81.30% 77.30% 79.20% 81.00% 77.40% Sb 78.80% 79.10% 81.30% 81.40% 80.60% 81.00% 84.00% 82.90% T 75.60% 75.10% 77.40% 80.30% 76.90% 78.00% 80.10% 79.50% W 80.80% 81.20% 82.40% 81.90% 80.40% 81.60% 84.40% 84.40% We 77.70% 80.00% 81.80% 84.40% 83.00% 82.70% 83.80% 83.30% Ba 81.00% 80.60% 80.90% 84.00% 80.40% 81.00% 82.60% 81.60% Ba2 79.70% 80.90% 82.00% 84.00% 82.60% 83.30% 85.40% 85.10% M4 78.40% 78.60% 79.90% 81.00% 82.10% 82.20% 82.60% 80.50% M7 74.70% 76.30% 78.60% 82.00% 79.60% 79.90% 82.30% 81.10% M8 80.80% 81.30% 82.50% 85.70% 82.00% 82.50% 84.00% 85.40% n 83.40% 83.40% 84.30% 86.00% 84.90% 86.30% 88.00% 85.50% N 77.40% 77.40% 79.40% 79.80% 80.00% 80.30% 81.50% 80.30% N2 82.00% 82.30% 83.80% 86.40% 82.40% 83.50% 86.60% 85.80% N3 73.60% 74.00% 75.10% 81.20% 76.00% 76.30% 80.30% 78.70% Be 75.50% 75.40% 77.60% 78.10% 78.10% 78.40% 79.70% 80.20% Ka 81.20% 81.20% 81.80% 83.90% 81.20% 83.10% 83.40% 82.30% SG 81.10% 81.90% 83.40% 85.50% 82.60% 84.30% 84.90% 83.00% Sa 76.80% 77.20% 78.10% 80.60% 77.50% 78.00% 79.70% 79.90% Sa2 78.90% 79.70% 80.70% 81.30% 79.70% 81.00% 82.30% 82.30% St2 77.70% 78.10% 79.00% 81.60% 79.60% 79.70% 80.50% 80.60% Stu 77.40% 77.30% 78.30% 82.50% 82.00% 81.80% 83.10% 82.90% Le 77.40% 78.10% 78.20% 79.60% 78.30% 78.60% 79.80% 78.90% Average (-B) 78.91% 79.27% 80.46% 82.72% 80.64% 81.13% 82.76% 82.02% Table 4: Word accuracy on the Anselm dataset, evaluated on the first 1,000 tokens, using our base encoder-decoder model (Sec. 3) and the multi-task model. G = greedy decoding, B = beam-search decoding (with beam size 5), F = lexical filter, A = attentional model. Best results (also taking into account the baseline results from Table 3) shown in bold. 344
2017
31
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 345–354 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1032 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 345–354 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1032 Deep Learning in Semantic Kernel Spaces Danilo Croce Simone Filice Giuseppe Castellucci Roberto Basili Department of Enterprise Engineering University of Roma Tor Vergata, Via del Politecnico 1, 00133, Rome, Italy {croce,filice,basili}@info.uniroma2.it [email protected] Abstract Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nystr¨om low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks. 1 Introduction Learning for Natural Language Processing (NLP) requires to more or less explicitly account for trees or graphs to express syntactic and semantic information. A straightforward modeling of such information has been obtained in statistical language learning with Tree Kernels (TKs) (Collins and Duffy, 2001), or by means of structured neural models (Hochreiter and Schmidhuber, 1997; Socher et al., 2013). In particular, kernel-based methods (Shawe-Taylor and Cristianini, 2004) have been largely applied in language processing for alleviating the need of complex activities of manual feature engineering (e.g., (Moschitti et al., 2008)). Although ad-hoc features are adopted by many successful approaches to language learning (e.g., (Gildea and Jurafsky, 2002)), kernels provide a natural way to capture textual generalizations directly operating over (possibly complex) linguistic structures. Sequence (Cancedda et al., 2003) or tree kernels (Collins and Duffy, 2001) are of particular interest as the feature space they implicitly generate reflects linguistic patterns. On the other hand, Recursive Neural Networks (Socher et al., 2013) have been shown to learn dense feature representations of the nodes in a structure, thus exploiting similarities between nodes and sub-trees. Also, Long-Short Term Memory (Hochreiter and Schmidhuber, 1997) networks build intermediate representations of sequences, resulting in similarity estimates over sequences and their inner sub-sequences. While such methods are highly effective and reach state-of-the-art results in many tasks, their adoption can be problematic. 
In kernel-based Support Vector Machine (SVM) the classification model corresponds to the set of support vectors (SVs) and weights justifying the maximal margin hyperplane: the classification cost crucially depends on their number, as classifying a new instance requires a kernel computation against all SVs, making their adoption in large data settings prohibitive. This scalability issue is evident in many NLP and Information Retrieval applications, such as in answer re-ranking in question answering (Severyn et al., 2013; Filice et al., 2016), where the number of SVs is typically very large. Improving the efficiency of kernel-based methods is a largely studied topic. The reduction of computational costs has been early designed by imposing a budget (Dekel and Singer, 2006; Wang and Vucetic, 2010), that is limiting the maximum number of SVs in a model. However, in complex tasks, such methods still require large budgets to reach 345 adequate accuracies. On the other hand, training complex neural networks is also difficult as no common design practice is established against complex data structures. In Levy et al. (2015), a careful analysis of neural word embedding models is carried out and the role of the hyper-parameter estimation is outlined. Different neural architectures result in the same performances, whenever optimal hyper-parameter tuning is applied. In this latter case, no significant difference is observed across different architectures, making the choice between different neural architectures a complex and empirical task. A general approach to the large scale modeling of complex structures is a critical and open problem. A viable and general solution to this scalability issue is provided by the Nystr¨om method (Williams and Seeger, 2001); it allows to approximate the Gram matrix of a kernel function and support the embedding of future input examples into a low-dimensional space. For example, if used over TKs, the Nystr¨om projection corresponds to the embedding of any tree into a lowdimensional vector. In this paper, we show that the Nystr¨om based low-rank embedding of input examples can be used as the early layer of a deep feed-forward neural network. A standard NN back-propagation training can thus be applied to induce non-linear functions in the kernel space. The resulting deep architecture, called Kernel-based Deep Architecture (KDA), is a mathematically justified integration of expressive kernel functions and deep neural architectures, with several advantages: it (i) directly operates over complex non-tensor structures, e.g., trees, without any manual feature or architectural engineering, (ii) achieves a drastic reduction of the computational cost w.r.t. pure kernel methods, and (iii) exploits the non-linearity of NNs to produce accurate models. The experimental evaluation shows that the proposed approach achieves state-of-the-art results in three semantic inference tasks: Semantic Parsing, Question Classification and Community Question Answering. In the rest of the paper, Section 2 surveys some of the investigated kernels. In Section 3 the Nystr¨om methodology and KDA are presented. Experimental evaluations are described in Section 4. Finally, Section 5 derives the conclusions. 
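Before moving to Section 2, the scalability issue raised above can be made concrete with a small sketch. The following is our own illustration (not code from the paper, and the toy kernel is purely hypothetical): scoring one example with a kernel SVM requires a kernel evaluation against every support vector, so inference cost grows with the size of the support-vector set.

```python
# Minimal sketch (not the authors' code): scoring one example with a kernel SVM.
# `kernel` stands in for an expensive structural kernel such as a tree kernel.

def svm_score(x, support_vectors, alphas, labels, bias, kernel):
    """Decision value f(x) = sum_i alpha_i * y_i * K(x, sv_i) + b.

    The loop over support vectors is the bottleneck: with thousands of SVs
    and a tree kernel that is (super-)linear in the number of tree nodes,
    every single prediction becomes expensive.
    """
    score = bias
    for sv, alpha, y in zip(support_vectors, alphas, labels):
        score += alpha * y * kernel(x, sv)   # one kernel evaluation per support vector
    return score


# Toy kernel on token sequences (word-overlap count), used only to make the sketch runnable.
def toy_kernel(a, b):
    return float(len(set(a) & set(b)))


svs = [("what", "is", "width"), ("who", "plays", "guitar")]
print(svm_score(("what", "instrument"), svs, [0.5, 0.8], [+1, -1], 0.1, toy_kernel))
```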
2 Kernel-based Semantic Inference In almost all NLP tasks, explicit models of complex syntactic and semantic structures are required, such as in Paraphrase Detection: deciding whether two sentences are valid paraphrases involves learning grammatical rewriting rules, such as semantics preserving mappings among subtrees. Also in Question Answering, the syntactic information about input questions is crucial. While manual feature engineering is always possible, kernel methods on structured representations of data objects, e.g., sentences, have been largely applied. Since Collins and Duffy (2001), sentences can be modeled through their corresponding parse tree, and Tree Kernels (TKs) result in similarity metrics directly operating over tree fragments. Such kernels corresponds to dot products in the (implicit) feature space made of all possible tree fragments (Haussler, 1999). Notice that the number of tree fragments in a tree bank is combinatorial with the number of tree nodes and gives rise to billions of features, i.e., dimensions. In this high-dimensional space, kernel-based algorithms, such as SVMs, can implicitly learn robust prediction models (Shawe-Taylor and Cristianini, 2004), resulting in state-of-the-art approaches in several NLP tasks, e.g., Semantic Role Labeling (Moschitti et al., 2008), Question Classification (Croce et al., 2011) or Paraphrase Identification (Filice et al., 2015). As the feature space generated by the structural kernels depends on the input structures, different tree representations can be adopted to reflect more or less expressive syntactic/semantic feature spaces. While constituency parse trees have been early used (e.g., (Collins and Duffy, 2001)), dependency parse trees correspond to graph structures. TKs usually rely on their tree conversions, where grammatical edge labels corresponds to nodes. An expressive tree representation of dependency graphs is the Grammatical Relation Centered Tree (GRCT). As illustrated in Figure 1, PoS-Tags and grammatical functions correspond to nodes, dominating their associated lexicals. Types of tree kernels. While a variety of TK functions have been studied, e.g., the Partial Tree Kernel (PTK) (Moschitti, 2006), the kernels used in this work model grammatical and semantic information, as triggered respectively by the dependency edge labels and lexical nodes. The latter is exploited through recent results in distributional models of lexical semantics, as proposed in 346 ROOT P . ?::. PRD NMOD PMOD NN field::n NMOD NN football::n NMOD DT a::d IN of::i NN width::n NMOD DT the::d VBZ be::v SBJ WP what::w Figure 1: Grammatical Relation Centered Tree (GRCT) of “What is the width of a football field?” word embedding methods (e.g., (Mikolov et al., 2013; Sahlgren, 2006). In particular, we adopt the Smoothed Partial Tree Kernel (SPTK) described in Croce et al. (2011): it extends the PTK formulation with a similarity function between lexical nodes in a GRCT, i.e., the cosine similarity between word vector representations based on word embeddings. We also use a further extension of the SPTK, called Compositionally Smoothed Partial Tree Kernel (CSPTK) (as in Annesi et al. (2014)). In CSPTK, the lexical information provided by the sentence words is propagated along the nonterminal nodes representing head-modifier dependencies. Figure 2 shows a compositionally-labeled tree, where the similarity function at the nodes can model lexical composition, i.e., capturing contextual information. 
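As a rough illustration of the lexical smoothing just described, the node similarity underlying an SPTK-style kernel can be thought of as exact matching for grammatical nodes and cosine similarity of word embeddings for lexical nodes. The sketch below is our own simplification (not the KeLP implementation), with hypothetical node labels and toy embeddings:

```python
import numpy as np

# Sketch of the node similarity used for SPTK-style smoothing (illustrative only).
# Lexical nodes such as "play::v" are compared via cosine similarity of their
# word vectors; grammatical nodes (PoS tags, dependency functions) must match exactly.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def node_similarity(n1, n2, embeddings):
    """n1, n2 are node labels from a GRCT, e.g. 'VB', 'dobj' or 'play::v'."""
    lexical1, lexical2 = "::" in n1, "::" in n2
    if lexical1 and lexical2:
        w1, w2 = n1.split("::")[0], n2.split("::")[0]
        if w1 in embeddings and w2 in embeddings:
            return cosine(embeddings[w1], embeddings[w2])
        return 1.0 if w1 == w2 else 0.0
    # grammatical nodes: identity match
    return 1.0 if n1 == n2 else 0.0

# toy embeddings, just to make the sketch runnable
emb = {"play": np.array([1.0, 0.2]), "perform": np.array([0.9, 0.3])}
print(node_similarity("play::v", "perform::v", emb))  # high similarity
print(node_similarity("VB", "NN", emb))               # 0.0
```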
For example, in the sentence "What instrument does Hendrix play?", the role of the word instrument can be fully captured only if its composition with the verb play is considered. The CSPTK applies a composition function between nodes: while several algebraic functions can be adopted to compose two word vectors representing a head/modifier pair, here we refer to a simple additive function that assigns to each (h, m) pair the linear combination of the involved vectors, i.e., (h, m) = Ah + Bm; although simple and efficient, it produces very effective CSPTK functions.

Complexity. The training phase of an optimal maximum-margin algorithm (such as SVM) requires a number of kernel operations that is more than linear (almost O(n^2)) with respect to the number of training examples n, as discussed in Chang and Lin (2011). The classification phase also depends on the size of the input dataset and on the intrinsic complexity of the targeted task: classifying a new instance requires evaluating the kernel function against each support vector. For complex tasks, the number of selected support vectors tends to be very large, and using the resulting model can be impractical. This cost is also problematic because single kernel operations can be very expensive: the cost of evaluating the PTK on a single tree pair is almost linear in the number of nodes of the input trees, as shown in Moschitti (2006). When lexical semantics is considered, as in SPTKs and CSPTKs, it is more than linear in the number of nodes (Croce et al., 2011).

3 Deep Learning in Kernel Spaces

3.1 The Nyström method

Given an input training dataset D, a kernel K(o_i, o_j) is a similarity function over D^2 that corresponds to a dot product in the implicit kernel space, i.e., K(o_i, o_j) = Φ(o_i) · Φ(o_j). The advantage of kernels is that the projection function Φ(o) = x ∈ R^n is never explicitly computed (Shawe-Taylor and Cristianini, 2004). In fact, this operation may be prohibitive when the dimensionality n of the underlying kernel space is extremely large, as for Tree Kernels (Collins and Duffy, 2001). Kernel functions are used by learning algorithms, such as SVM, to operate only implicitly on instances in the kernel space, without ever accessing their explicit representation. Let us apply the projection function Φ over all examples from D to derive the representations x, which form the rows of the matrix X. The Gram matrix can always be computed as G = XX^⊤, with each single element corresponding to G_{ij} = Φ(o_i) · Φ(o_j) = K(o_i, o_j). The aim of the Nyström method is to derive a new low-dimensional embedding x̃ in an l-dimensional space, with l ≪ n, so that G̃ = X̃X̃^⊤ and G̃ ≈ G. This is obtained by generating an approximation G̃ of G using a subset of l columns of the matrix, i.e., a selection of a subset L ⊂ D of the available examples, called landmarks. Suppose we randomly sample l columns of G, and let C ∈ R^{|D| × l} be the matrix of these sampled columns. Then we can rearrange the columns and rows of G and define X = [X_1  X_2] such that

G = XX^⊤ = \begin{bmatrix} W & X_1^⊤ X_2 \\ X_2^⊤ X_1 & X_2^⊤ X_2 \end{bmatrix}  and  C = \begin{bmatrix} W \\ X_2^⊤ X_1 \end{bmatrix}   (1)

where W = X_1^⊤ X_1, i.e., the subset of G that contains only landmarks.
The Nyström approximation can be defined as:

G ≈ G̃ = C W^† C^⊤   (2)

where W^† denotes the Moore-Penrose inverse of W. The Singular Value Decomposition (SVD) is used to obtain W^† as follows. First, W is decomposed so that W = USV^⊤, where U and V are both orthogonal matrices, and S is a diagonal matrix containing the (non-zero) singular values of W on its diagonal. Since W is symmetric and positive definite, W = USU^⊤. Then W^† = US^{-1}U^⊤ = US^{-1/2}S^{-1/2}U^⊤, and Equation 2 can be rewritten as

G ≈ G̃ = CUS^{-1/2}S^{-1/2}U^⊤C^⊤ = (CUS^{-1/2})(CUS^{-1/2})^⊤ = X̃X̃^⊤   (3)

Given an input example o ∈ D, a new low-dimensional representation x̃ can thus be determined by considering the corresponding item of C as

x̃ = c U S^{-1/2}   (4)

where c is the vector whose dimensions contain the evaluations of the kernel function between o and each landmark o_j ∈ L. Therefore, the method produces l-dimensional vectors. If k is the average number of basic operations required during a single kernel computation, the overall cost of a single projection is O(kl + l^2), where the first term corresponds to the cost of generating the vector c, while the second term is needed for the matrix multiplications in Equation 4. Typically, the number of landmarks l ranges from hundreds to a few thousand and, for complex kernels (such as Tree Kernels), the projection cost can be reduced to O(kl). Several policies have been defined to determine the selection of landmarks that best reduces the Gram matrix approximation error. In this work, uniform sampling without replacement is adopted, as suggested by Kumar et al. (2012), where this policy has been theoretically and empirically shown to achieve results comparable with other (more complex) selection policies.

Figure 2: Compositional Grammatical Relation Centered Tree (CGRCT) of "What instrument does Hendrix play?"

3.2 A Kernel-based Deep Architecture

The Nyström representation x̃ of any input example o introduced above is linear and can be adopted to feed a neural network architecture. We assume a labeled dataset L = {(o, y) | o ∈ D, y ∈ Y} is available, where o refers to a generic instance and y is its associated class. In this section, we define a Multi-Layer Perceptron (MLP) architecture with a specific Nyström layer based on the Nyström embeddings of Eq. 4. We will refer to this architecture as the Kernel-based Deep Architecture (KDA). KDA has an input layer, a Nyström layer, a possibly empty sequence of non-linear hidden layers and a final classification layer, which produces the output. The input layer corresponds to the input vector c, i.e., the row of the C matrix associated with an example o. Notice that, to adopt the KDA, all the values of the matrix C must be available. In the training stage, these values are in general cached. During the classification stage, the c vector corresponding to an example o is directly computed by l kernel computations between o and each one of the l landmarks. The input layer is mapped to the Nyström layer through the projection in Equation 4. Notice that the embedding also provides the proper weights, defined by US^{-1/2}, so that the mapping can be expressed through the Nyström matrix H_Ny = US^{-1/2}: it corresponds to a pre-trained stage derived through SVD, as discussed in Section 3.1.
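The construction of Equations 2–4 can be sketched in a few lines of numpy. The code below is our own illustrative rendering (not the KeLP implementation); the kernel function and the RBF toy example are stand-ins for the structural kernels used in the paper, and landmarks are assumed to be uniformly sampled.

```python
import numpy as np

# Illustrative Nystrom projection (Eq. 4): x_tilde = c U S^{-1/2},
# where c holds the kernel evaluations between an example and the landmarks.

def nystrom_projector(landmarks, kernel):
    """Precompute H_Ny = U S^{-1/2} from W, the landmark-vs-landmark Gram matrix."""
    W = np.array([[kernel(a, b) for b in landmarks] for a in landmarks])
    # W is symmetric positive (semi-)definite, so its SVD has the form U S U^T
    U, S, _ = np.linalg.svd(W)
    S_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(S, 1e-12)))
    return U @ S_inv_sqrt

def project(example, landmarks, H_ny, kernel):
    c = np.array([kernel(example, lm) for lm in landmarks])  # l kernel evaluations
    return c @ H_ny                                          # l-dimensional embedding

# Toy usage with an RBF kernel over vectors, only to make the sketch runnable.
rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
rng = np.random.default_rng(0)
data = rng.normal(size=(50, 5))
landmarks = data[rng.choice(50, size=10, replace=False)]
H_ny = nystrom_projector(landmarks, rbf)
print(project(data[0], landmarks, H_ny, rbf).shape)  # (10,)
```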
Equation 4 provides a static definition for H_Ny, whose weights can be left invariant during the neural network training. However, the values of H_Ny can also be made available to the standard back-propagation adjustments applied during training [1]. Formally, the low-dimensional embedding of an input example o is x̃ = c H_Ny = c US^{-1/2}. The resulting x̃ is the input to one or more non-linear hidden layers. Each t-th hidden layer is realized through a matrix H_t ∈ R^{h_{t-1} × h_t} and a bias vector b_t ∈ R^{1 × h_t}, where h_t denotes the desired hidden layer dimensionality. Clearly, given that H_Ny ∈ R^{l × l}, h_0 = l. The first hidden layer in fact receives as input x̃ = cH_Ny, which corresponds to the t = 0 layer input x_0 = x̃, and its computation is formally expressed by x_1 = f(x_0 H_1 + b_1), where f is a non-linear activation function. In general, the generic t-th layer is modeled as:

x_t = f(x_{t-1} H_t + b_t)   (5)

The final layer of KDA is the classification layer, realized through the output matrix H_O and the output bias vector b_O. Their dimensionality depends on the dimensionality of the last hidden layer (called O−1) and on the number |Y| of different classes, i.e., H_O ∈ R^{h_{O-1} × |Y|} and b_O ∈ R^{1 × |Y|}, respectively. In particular, this layer computes a linear classification function with a softmax operator, so that ŷ = softmax(x_{O-1} H_O + b_O). In order to avoid over-fitting, two different regularization schemes are applied. First, dropout is applied to the input x_t of each hidden layer (t ≥ 1) and to the input x_{O-1} of the final classifier. Second, an L2 regularization is applied to the norm of each layer H_t and H_O [2]. Finally, the KDA is trained by optimizing a loss function made of the sum of two terms: first, the cross-entropy function between the gold classes and the predicted ones; second, the L2 regularization, whose importance is regulated by a meta-parameter λ. The final loss function is thus

L(y, ŷ) = \sum_{(o,y) ∈ L} y log(ŷ) + λ \sum_{H ∈ {H_t} ∪ {H_O}} ‖H‖_2

where ŷ are the softmax values computed by the network and y are the true one-hot encoding values associated with the example from the labeled training dataset L.

[1] In our preliminary experiments, adjustments to the H_Ny matrix have been tested, but no significant effect was observed. Therefore, no adjustment has been used in any reported experiment, although a more in-depth exploration of this aspect is needed.
[2] The input layer and the Nyström layer are not modified during the learning process, and they are not regularized.

4 Empirical Investigation

The proposed KDA has been applied, with the same architecture but different kernels, to three NLP tasks, i.e., Question Classification, Community Question Answering, and Automatic Boundary Detection in Semantic Role Labeling. The Nyström projector has been implemented in the KeLP framework [3]. The neural network has been implemented in TensorFlow [4], with 2 hidden layers whose dimensionality corresponds to the number of involved Nyström landmarks. The rectified linear unit is the non-linear activation function in each layer. Dropout has been applied in each hidden layer and in the final classification layer. The values of the dropout parameter and of the λ parameter of the L2 regularization have been selected from a set of values via grid search. The Adam optimizer with a learning rate of 0.001 has been applied to minimize the loss function, with multi-epoch (500) training, each epoch fed with batches of size 256.

[3] http://www.kelp-ml.org
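A KDA-style classifier is, in effect, a plain MLP stacked on the precomputed Nyström embedding. The sketch below is our own approximation in present-day TensorFlow/Keras, not the authors' released code: the Nyström projection is assumed to be computed outside the network (which mirrors the choice of leaving the Nyström layer untrained), and the dropout rate and λ value are placeholders to be tuned by grid search.

```python
import tensorflow as tf

# Sketch of a KDA-like network over precomputed Nystrom embeddings x_tilde (shape: l).
# Hidden size = number of landmarks l, ReLU activations, dropout on the input of each
# hidden layer and of the classifier, L2 penalty on each H_t and H_O (cf. Eq. 5 and the loss).

def build_kda(l, num_classes, num_hidden=2, dropout=0.2, lam=1e-4):
    reg = tf.keras.regularizers.l2(lam)
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(l,)))
    for _ in range(num_hidden):
        model.add(tf.keras.layers.Dropout(dropout))
        model.add(tf.keras.layers.Dense(l, activation="relu", kernel_regularizer=reg))
    model.add(tf.keras.layers.Dropout(dropout))
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax",
                                    kernel_regularizer=reg))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage (X_tilde_train and Y_onehot_train are assumed to exist):
# model = build_kda(l=600, num_classes=6)
# model.fit(X_tilde_train, Y_onehot_train, batch_size=256, epochs=500)
```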
We adopted an early stop strategy, where the best model was selected according to the performance over the development set. Every performance measure is obtained against a specific sampling of the Nystr¨om landmarks. Results averaged against 5 such samplings are always hereafter reported. 4.1 Question Classification Question Classification (QC) is the task of mapping a question into a closed set of answer types in a Question Answering system. We used the UIUC dataset (Li and Roth, 2006), including a training and test set of 5, 452 and 500 questions, respectively, organized in 6 classes (like ENTITY or HUMAN). TKs resulted very effective, as shown in Croce et al. (2011); Annesi et al. (2014). In Annesi et al. (2014), QC is mapped into a One-vsAll multi-classification schema, where the CSPTK achieves state-of-the-art results of 95%: it acts directly over compositionally labeled trees without relying on any manually designed feature. In order to proof the benefits of the KDA architecture, we generated Nystr¨om representation of the CSPTK kernel function5 with default parameters (i.e., µ = λ = 0.4). The SVM formulation by Chang and Lin (2011), fed with the CSPTK (hereafter KSVM), is here adopted to determine the reachable upper bound in classification quality, i.e., a 95% of accuracy, at higher computational costs. It establishes the state-of-the-art over the UIUC dataset. The resulting model includes 3,873 support vectors: this corresponds to the number of kernel operations required to classify any input test question. The Nystr¨om method based on a number of landmarks ranging from 100 to 1,000 is adopted for modeling input vectors in 4https://www.tensorflow.org/ 5The lexical vectors used in the CSPTK are generated again using the Word2vec tool with a Skip-gram model. 349 the CSPTK kernel space. Results are reported in Table 1: computational saving refers to the percentage of avoided kernel computations with respect to the application of the KSVM to each test instance. To justify the need of the Neural Network, we compared the proposed KDA to an efficient linear SVM that is directly trained over the Nystr¨om embeddings. This SVM implements the Dual Coordinate Descent method (Hsieh et al., 2008) and will be referred as SVMDCD. We also measured the state-of-the-art Convolutional Neural Network6 (CNN) of Kim (2014), achieving the remarkable accuracy of 93.6%. Notice that the linear classifier SVMDCD operating over the approximated kernel space achieves the same classification quality of the CNN when just 1,000 landmarks are considered. KDA improves this results, achieving 94.3% accuracy even with fewer landmarks (only 600), showing the effectiveness of non-linear learning over the Nystr¨om input. Although KSVM improves to 95%, KDA provides a saving of more than 84% kernel computations at classification time. This result is straightforward as it confirms that linguistic information encoded in a tree is important in the analysis of questions and can be used as a pre-training strategy. Figure 3 shows the accuracy curves according to various approximations of the kernel space, i.e., number of landmarks. Table 1: Results in terms of Accuracy and saving in the Question Classification task Model #Land. 
Accuracy Saving CNN 93.6% KSVM 95.0% 0.0% 100 88.5% (84.1%) 97.4% 200 92.2% (88.7%) 94.8% KDA 400 93.7% (91.6%) 89.7% (SVMDCD) 600 94.3% (92.8%) 84.5% 800 94.3% (93.0%) 79.3% 1,000 94.2% (93.6%) 74.2% 4.2 Community Question-Answering In the SemEval-2016 task 3, participants were asked to automatically provide good answers in a community question answering setting (Nakov et al., 2016). We focused on the subtask A: given a question and a large collection of questioncomment threads created by a user community, 6The deep architecture presented in Kim (2014) outperforms several NN models, including the Recursive Neural Tensor Network or Tree-LSTM presented in (Socher et al., 2013; Tai et al., 2015) which presents a semantic compositionality model that exploits parse trees. 88% 90% 92% 94% 96% 100 200 300 400 500 600 700 800 900 1000 Accuracy # of landmarks CNN KDA KSVM SVM_DCD Figure 3: QC task - accuracy curves w.r.t. the number of landmarks. the task consists in (re-)ranking the comments w.r.t. their utility in answering the question. Subtask A can be modeled as a binary classification problem, where instances are (question, comment) pairs. Each pair generates an example for a binary SVM, where the positive label is associated to a good comment and the negative label refers to potentially useful and bad comments. The classification score achieved over different (question, comment) pairs is used to sort instances and produce the final ranking over comments. The above setting results in a train and test dataset made of 20,340 and 3,270 examples, respectively. In Filice et al. (2016), a Kernel-based SVM classifier (KSVM) achieved state-of-the-art results by adopting a kernel combination that exploited (i) feature vectors containing linguistic similarities between the texts in a pair; (ii) shallow syntactic trees that encode the lexical and morpho-syntactic information shared between text pairs; (iii) feature vectors capturing task-specific information. Table 2: Results in terms of F1 and savings in the Community Question Answering task Model #Land. F1 Saving KSVM 0.644 0.0% ConvKN 0.662 100 0.638 (0.596) 99.1% 200 0.635 (0.627) 98.2% KDA 400 0.657 (0.637) 96.5% (SVMDCD) 600 0.669 (0.645) 94.7% 800 0.680 (0.653) 92.9% 1,000 0.674 (0.644) 91.2% Such model includes 11,322 support vectors. We investigated the KDA architecture, trained by maximizing the F1 measure, based on a Nystr¨om layer initialized using the same kernel functions as KSVM. We varied the Nystr¨om dimensions from 100 to 1,000 landmarks, i.e., a much lower number than the support vectors of KSVM. Table 2 reports the results: very high F1 scores 350 are observed with impressive savings in terms of kernel computations (between 91.2% and 99%). Also on the cQA task, the F1 obtained by the SVMDCD is significantly lower than the KDA one. Moreover, with 800 landmarks KDA achieves the remarkable results of 0.68 of F1, that is the state-of-the-art against other convolutional systems, e.g., ConvKN (Barr´on-Cede˜no et al., 2016): this latter combines convolutional tree kernels with kernels operating on sentence embeddings generated by a convolutional neural network. 4.3 Argument Boundary Detection Semantic Role Labeling (SRL) consists of the detection of the semantic arguments associated with the predicate of a sentence (called Lexical Unit) and their classification into their specific roles (Fillmore, 1985). 
For example, given the sentence “Bootleggers then copy the film onto hundreds of tapes” the task would be to recognize the verb copy as representing the DUPLICATION frame with roles, CREATOR for Bootleggers, ORIGINAL for the film and GOAL for hundreds of tapes. Argument Boundary Detection (ABD) corresponds to the SRL subtask of detecting the sentence fragments spanning individual roles. In the previous example the phrase “the film” represents a role (i.e., ORIGINAL), while “of tapes” or “film onto hundreds” do not, as they just partially cover one or multiple roles, respectively. The ABD task has been successfully tackled using TKs since Moschitti et al. (2008). It can be modeled as a binary classification task over each parse tree node n, where the argument span reflects words covered by the sub-tree rooted at n. In our experiments, Grammatical Relation Centered Tree (GRCT) derived from dependency grammar (Fig. 4) are employed, as shown in Fig. 5. Each node is considered as a candidate in covering a possible argument. In particular, the structure in Fig. 5a is a positive example. On the contrary, in Fig. 5b the NMOD node only covers the phrase “of tapes”, i.e., a subset of the correct role, and it represents a negative example7. We selected all the sentences whose predicate word (lexical unit) is a verb (they are about 7The nodes of the subtree covering the words to be verified as possible argument are marked with a FE tag. The word evoking the frame and its ancestor nodes are also marked with the LU tag. The other nodes are pruned out, except the ones connecting the LU nodes to the FE ones. Bootleggers then copy the film onto hundreds of tapes NNS RB VBP DT NN IN NNS IN NNS ROOT SUBJ TMP OBJ NMOD NMOD PMOD NMOD PMOD Figure 4: Example of dependency parse tree ROOTLU VBPLU copy::v SBJFE NNSFE bootleggers::n (a) ROOTLU ADV PMOD NMODFE PMODFE NNSFE tape::n INFE of::i NNS hundred::n IN onto::i VBPLU copy::v (b) Figure 5: Conversion from dependency graph to GRCT. Tree in Fig. 5a is a positive example, while in Fig. 5b a negative one. 60,000), from the 1.3 version of the Framenet dataset (Baker et al., 1998). This gives rise to about 1,400,000 sub-trees, i.e., the positive and negative instances. The dataset is split in train and test according to the 90/10 proportion (as in (Johansson and Nugues, 2008)). This size makes the application of a traditional kernel-based method unfeasible, unless a significant instance sub-sampling is performed. We firstly experimented standard SVM learning over a sampled training set of 10,000 examples, a typical size for annotated datasets in computational linguistics tasks. We adopted the Smoothed Partial Tree Kernel (Croce et al., 2011) with standard parameters (i.e., µ = λ = 0.4) and lexical nodes expressed through 250-dimensional vectors obtained by applying Word2Vec (Mikolov et al., 2013) to the entire Wikipedia. When trained over this 10k instances dataset, the kernel-based SVM (KSVM) achieves an F1 of 70.2%, over the same test set used in Croce and Basili (2016) that includes 146,399 examples. The KSVM learning produces a model including 2, 994 support vectors, i.e., the number of kernel operations required to classify each new test instance. We then apply the Nystr¨om linearization to a larger dataset made of 100k examples, and trained a classifier using both the Dual Coordinate Descent method (Hsieh et al., 2008), SVMDCD, and the KDA proposed in this work. Table 3 presents the results in terms of F1 and saved kernel operation. 
Although SVMDCD with 500 landmarks already achieves 0.713 F1, a score higher than KSVM, it is signif351 0,50 0,55 0,60 0,65 0,70 0,75 0,80 50 100 200 300 400 500 F1 # of landmarks KDA KSVM SVM_DCD Figure 6: ABD task: F1 measure curves w.r.t. the number of landmarks. icantly improved by the KDA. KDA achieves up to 0.76 F1 with only 400 landmarks, resulting in a huge step forward w.r.t. the KSVM. This result is straightforward considering (i) the reduction of required kernel operations, i.e., more than 86% are saved and (ii) the quality achieved since 100 landmarks (i.e., 0.711, higher than the KSVM). Table 3: Results in terms of F1 and saving in the Argument Boundary Detection task. Model Land. Tr.Size F1 Saving KSVM 10k 0.702 0.0% 100 100k 0.711 (0.618) 96.7% KDA 200 100k 0.737 (0.661) 93.3% (SVMDCD) 300 100k 0.753 (0.686) 90.0% 400 100k 0.760 (0.704) 86.6% 500 100k 0.754 (0.713) 83.3% 5 Discussion and Conclusions In this work, we promoted a methodology to embed structured linguistic information within NNs, according to mathematically rich semantic similarity models, based on kernel functions. Structured data, such as trees, are transformed into dense vectors according to the Nystr¨om methodology, and the NN is effective in capturing nonlinearities in these representations, but still improving generalization at a reasonable complexity. At the best our knowledge, this work is one of the few attempts to systematically integrate linguistic kernels within a deep neural network architecture. The problem of combining such methodologies has been studied in specific works, such as (Baldi et al., 2011; Cho and Saul, 2009; Yu et al., 2009). In Baldi et al. (2011) the authors propose a hybrid classifier, for bridging kernel methods and neural networks. In particular, they use the output of a kernelized k-nearest neighbors algorithm as input to a neural network. Cho and Saul (2009) introduced a family of kernel functions that mimic the computation of large multilayer neural networks. However, such kernels can be applied only on vector inputs. In Yu et al. (2009), deep neural networks for rapid visual recognition are trained with a novel regularization method taking advantage of kernels as an oracle representing prior knowledge. The authors transform the kernel regularizer into a loss function and carry out the neural network training by gradient descent. In Zhuang et al. (2011) a different approach has been promoted: a multiple (two) layer architecture of kernel functions, inspired by neural networks, is studied to find the best kernel combination in a Multiple Kernel Learning setting. In Mairal et al. (2014) the invariance properties of convolutional neural networks (LeCun et al., 1998) are modeled through kernel functions, resulting in a Convolutional Kernel Network. Other effort for combining NNs and kernel methods is described in Tymoshenko et al. (2016), where a SVM adopts a tree kernels combinations with embeddings learned through a CNN. The approach here discussed departs from previous approaches in different aspects. First, a general framework is promoted: it is largely applicable to any complex kernel, e.g., structural kernels or combinations of them. The efficiency of the Nystr¨om methodology encourages its adoption, especially when complex kernel computations are required. Notice that other low-dimensional approximations of kernel functions have been studied, as for example the randomized feature mappings proposed in Rahimi and Recht (2008). 
However, these assume that (i) instances have vectorial form and (ii) shift-invariant kernels are adopted. The Nystr¨om method adopted here does not suffer of such limitations: as our target is the application to structured (linguistic) data, more general kernels, i.e., non-shift-invariant convolution kernels are needed. Given the Nystr¨om approximation, the learning setting corresponds to a general well-known neural network architecture, i.e., a multilayer perceptron, and does not require any manual feature engineering or the design of ad-hoc network architectures. The success in three different tasks confirms its large applicability without major changes or adaptations. Second, we propose a novel learning strategy, as the capability of kernel methods to represent complex search spaces is combined with the ability of neural networks to find non-linear so352 lutions to complex tasks. Last, the suggested KDA framework is fully scalable, as (i) the network can be parallelized on multiple machines, and (ii) the computation of the Nystr¨om reconstruction vector c can be easily parallelized on multiple processing units, ideally l, as each unit can compute one ci value. Future work will address experimentations with larger scale datasets; moreover, it is interesting to experiment with more landmarks in order to better understand the trade-off between the representation capacity of the Nystr¨om approximation of the kernel functions and the over-fitting that can be introduced in a neural network architecture. Finally, the optimization of the KDA methodology through the suitable parallelization on multicore architectures, as well as the exploration of mechanisms for the dynamic reconstruction of kernel spaces (e.g., operating over HNy) also constitute interesting future research directions on this topic. References Paolo Annesi, Danilo Croce, and Roberto Basili. 2014. Semantic compositionality in tree kernels. In Proceedings of CIKM 2014. ACM. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proc. of COLING-ACL. Montreal, Canada. Pierre Baldi, Chloe Azencott, and S. Joshua Swamidass. 2011. Bridging the gap between neural network and kernel methods: Applications to drug discovery. In Proceedings of the 20th Italian Workshop on Neural Nets. http://dl.acm.org/citation.cfm?id=1940632.1940635. Alberto Barr´on-Cede˜no, Giovanni Da San Martino, Shafiq Joty, Alessandro Moschitti, Fahad AlObaidli, Salvatore Romeo, Kateryna Tymoshenko, and Antonio Uva. 2016. ConvKN at SemEval-2016 task 3: Answer and question selection for question answering on arabic and english fora. In Proceedings of SemEval-2016. Nicola Cancedda, ´Eric Gaussier, Cyril Goutte, and Jean-Michel Renders. 2003. Word-sequence kernels. Journal of Machine Learning Research 3:1059–1082. Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3):27:1–27:27. https://doi.org/10.1145/1961189.1961199. Youngmin Cho and Lawrence K. Saul. 2009. Kernel methods for deep learning. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, Curran Associates, Inc., pages 342–350. http://papers.nips.cc/paper/3628-kernelmethods-for-deep-learning.pdf. Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Proceedings of Neural Information Processing Systems (NIPS’2001). pages 625–632. Danilo Croce and Roberto Basili. 2016. 
Large-scale kernel-based language learning through the ensemble nystrom methods. In Proceedings of ECIR 2016. Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of EMNLP ’11. pages 1034–1046. Ofer Dekel and Yoram Singer. 2006. Support vector machines on a budget. In NIPS. MIT Press, pages 345–352. Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. KeLP at SemEval-2016 task 3: Learning semantic relations between questions and comments. In Proceedings of SemEval ’16. Simone Filice, Giovanni Da San Martino, and Alessandro Moschitti. 2015. Structural representations for learning relations between pairs of texts. In Proceedings of ACL 2015. Beijing, China, pages 1003– 1013. http://www.aclweb.org/anthology/P15-1097. Charles J. Fillmore. 1985. Frames and the semantics of understanding. Quaderni di Semantica 6(2):222– 254. Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics 28(3):245–288. David Haussler. 1999. Convolution kernels on discrete structures. In Technical Report UCS-CRL-99-10. University of California, Santa Cruz. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. 2008. A dual coordinate descent method for large-scale linear svm. In Proceedings of the ICML 2008. ACM, pages 408–415. Richard Johansson and Pierre Nugues. 2008. The effect of syntactic representation on semantic role labeling. In Proceedings of COLING. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings EMNLP 2014. Doha, Qatar, pages 1746–1751. Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. 2012. Sampling methods for the nystr¨om method. J. Mach. Learn. Res. 13:981–1006. 353 Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proc. of the IEEE 86(11). Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3:211–225. https://transacl.org/ojs/index.php/tacl/article/view/570. Xin Li and Dan Roth. 2006. Learning question classifiers: the role of semantic information. Natural Language Engineering 12(3):229–249. Julien Mairal, Piotr Koniusz, Zaid Harchaoui, and Cordelia Schmid. 2014. Convolutional kernel networks. In Advances in Neural Information Processing Systems. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In ECML. Berlin, Germany. Alessandro Moschitti, Daniele Pighin, and Robert Basili. 2008. Tree kernels for semantic role labeling. Computational Linguistics 34. Preslav Nakov, Llu´ıs M`arquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. SemEval-2016 task 3: Community question answering. In Proceedings of SemEval-2016. Ali Rahimi and Benjamin Recht. 2008. Random features for large-scale kernel machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, Curran Associates, Inc., pages 1177– 1184. 
http://papers.nips.cc/paper/3182-randomfeatures-for-large-scale-kernel-machines.pdf. Magnus Sahlgren. 2006. The Word-Space Model. Ph.D. thesis, Stockholm University. Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013. Building structures from classifiers for passage reranking. ACM, New York, NY, USA, CIKM ’13, pages 969–978. John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP ’13. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 1556–1566. http://aclweb.org/anthology/P/P15/P15-1150.pdf. Kateryna Tymoshenko, Daniele Bonadiman, and Alessandro Moschitti. 2016. Convolutional neural networks vs. convolution kernels: Feature engineering for answer sentence reranking. In Proceedings of NAACL 2016. http://www.aclweb.org/anthology/N16-1152. Zhuang Wang and Slobodan Vucetic. 2010. Online passive-aggressive algorithms on a budget. Journal of Machine Learning Research - Proceedings Track 9:908–915. Christopher K. I. Williams and Matthias Seeger. 2001. Using the nystr¨om method to speed up kernel machines. In Proceedings of NIPS 2000. Kai Yu, Wei Xu, and Yihong Gong. 2009. Deep learning with kernel regularization for visual recognition. In Advances in Neural Information Processing Systems 21, Curran Associates, Inc., pages 1889–1896. Jinfeng Zhuang, Ivor W. Tsang, and Steven C. H. Hoi. 2011. Two-layer multiple kernel learning. In AISTATS. JMLR.org, volume 15 of JMLR Proceedings, pages 909–917. 354
2017
32
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 355–365 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1033 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 355–365 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1033 Topically Driven Neural Language Model Jey Han Lau1,2 Timothy Baldwin2 Trevor Cohn2 1 IBM Research 2 School of Computing and Information Systems, The University of Melbourne [email protected], [email protected], [email protected] Abstract Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics. 1 Introduction Topic models provide a powerful tool for extracting the macro-level content structure of a document collection in the form of the latent topics (usually in the form of multinomial distributions over terms), with a plethora of applications in NLP (Hall et al., 2008; Newman et al., 2010a; Wang and McCallum, 2006). A myriad of variants of the classical LDA method (Blei et al., 2003) have been proposed, including recent work on neural topic models (Cao et al., 2015; Wan et al., 2012; Larochelle and Lauly, 2012; Hinton and Salakhutdinov, 2009). Separately, language models have long been a foundational component of any NLP task involving generation or textual normalisation of a noisy input (including speech, OCR and the processing of social media text). The primary purpose of a language model is to predict the probability of a span of text, traditionally at the sentence level, under the assumption that sentences are independent of one another, although recent work has started using broader local context such as the preceding sentences (Wang and Cho, 2016; Ji et al., 2016). In this paper, we combine the benefits of a topic model and language model in proposing a topically-driven language model, whereby we jointly learn topics and word sequence information. This allows us to both sensitise the predictions of the language model to the larger document narrative using topics, and to generate topics which are better sensitised to local context and are hence more coherent and interpretable. Our model has two components: a language model and a topic model. We implement both components using neural networks, and train them jointly by treating each component as a sub-task in a multi-task learning setting. We show that our model is superior to other language models that leverage additional context, and that the generated topics are potentially more coherent than LDA topics. The architecture of the model provides an extra dimensionality of topic interpretability, in supporting the generation of sentences from a topic (or mix of topics). 
It is also highly flexible, in its ability to be supervised and incorporate side information, which we show to further improve language model performance. An open source implementation of our model is available at: https://github.com/jhlau/ topically-driven-language-model. 2 Related Work Griffiths et al. (2004) propose a model that learns topics and word dependencies using a Bayesian framework. Word generation is driven by either LDA or an HMM. For LDA, a word is generated based on a sampled topic in the document. For the 355 Document context (n x e) Topic input A (k x a) Topic output B (k x b) Softmax Convolutional Max-over-time pooling Fully connected with softmax output Attention distribution p Topic model output Document-topic representation s neural Language model output Modern network approach network Language model Topic model x g lstm lstm lstm Neural Networks are a computational approach which is based on Document vector d Figure 1: Architecture of tdlm. Scope of the models are denoted by dotted lines: blue line denotes the scope of the topic model, red the language model. HMM, a word is conditioned on previous words. A key difference over our model is that their language model is driven by an HMM, which uses a fixed window and is therefore unable to track longrange dependencies. Cao et al. (2015) relate the topic model view of documents and words — documents having a multinomial distribution over topics and topics having a multinomial distributional over words — from a neural network perspective by embedding these relationships in differentiable functions. With that, the model lost the stochasticity and Bayesian inference of LDA but gained non-linear complex representations. The authors further propose extensions to the model to do supervised learning where document labels are given. Wang and Cho (2016) and Ji et al. (2016) relax the sentence independence assumption in language modelling, and use preceeding sentences as additional context. By treating words in preceeding sentences as a bag of words, Wang and Cho (2016) use an attentional mechanism to focus on these words when predicting the next word. The authors show that the incorporation of additional context helps language models. 3 Architecture The architecture of the proposed topically-driven language model (henceforth “tdlm”) is illustrated in Figure 1. There are two components in tdlm: a language model and a topic model. The language model is designed to capture word relations in sentences, while the topic model learns topical information in documents. The topic model works like an auto-encoder, where it is given the document words as input and optimised to predict them. The topic model takes in word embeddings of a document and generates a document vector using a convolutional network. Given the document vector, we associate it with the topics via an attention scheme to compute a weighted mean of topic vectors, which is then used to predict a word in the document. The language model is a standard LSTM language model (Hochreiter and Schmidhuber, 1997; Mikolov et al., 2010), but it incorporates the weighted topic vector generated by the topic model to predict succeeding words. 356 Marrying the language and topic models allows the language model to be topically driven, i.e. it models not just word contexts but also the document context where the sentence occurs, in the form of topics. 3.1 Topic Model Component Let xi ∈Re be the e-dimensional word vector for the i-th word in the document. 
A document of n words is represented as a concatenation of its word vectors: x1:n = x1 ⊕x2 ⊕... ⊕xn where ⊕denotes the concatenation operator. We use a number of convolutional filters to process the word vectors, but for clarity we will explain the network with one filter. Let wv ∈Reh be a convolutional filter which we apply to a window of h words to generate a feature. A feature ci for a window of words xi:i+h−1 is given as follows: ci = I(w⊺ vxi:i+h−1 + bv) where bv is a bias term and I is the identity function.1 A feature map c is a collection of features computed from all windows of words: c = [c1, c2, ..., cn−h+1] where c ∈Rn−h+1. To capture the most salient features in c, we apply a max-over-time pooling operation (Collobert et al., 2011), yielding a scalar: d = max i ci In the case where we use a filters, we have d ∈Ra, and this constitutes the vector representation of the document generated by the convolutional and max-over-time pooling network. The topic vectors are stored in two lookup tables A ∈Rk×a (input vector) and B ∈Rk×b (output vector), where k is the number of topics, and a and b are the dimensions of the topic vectors. To align the document vector d with the topics, we compute an attention vector which is used to 1A non-linear function is typically used here, but preliminary experiments suggest that the identity function works best for tdlm. compute a document-topic representation:2 p = softmax(Ad) (1) s = B⊺p (2) where p ∈Rk and s ∈Rb. Intuitively, s is a weighted mean of topic vectors, with the weighting given by the attention p. This is inspired by the generative process of LDA, whereby documents are defined as having a multinomial distribution over topics. Finally s is connected to a dense layer with softmax output to predict each word in the document, where each word is generated independently as a unigram bag-of-words, and the model is optimised using categorical cross-entropy loss. In practice, to improve efficiency we compute loss for predicting a sequence of m1 words in the document, where m1 is a hyper-parameter. 3.2 Language Model Component The language model is implemented using LSTM units (Hochreiter and Schmidhuber, 1997): it = σ(Wivt + Uiht−1 + bi) ft = σ(Wfvt + Ufht−1 + bf) ot = σ(Wovt + Uoht−1 + bi) ˆct = tanh(Wcvt + Ucht−1 + bc) ct = ft ⊙ct−1 + it ⊙ˆct ht = ot ⊙tanh(ct) where ⊙denotes element-wise product; it, ft, ot are the input, forget and output activations respectively at time step t; and vt, ht and ct are the input word embedding, LSTM hidden state, and cell state, respectively. Hereinafter W, U and b are used to refer to the model parameters. Traditionally, a language model operates at the sentence level, predicting the next word given its history of words in the sentence. The language model of tdlm incorporates topical information by assimilating the document-topic representation (s) with the hidden output of the LSTM (ht) at each time step t. To prevent tdlm from memorising the next word via the topic model network, we exclude the current sentence from the document context. 2The attention mechanism was inspired by memory networks (Graves et al., 2014; Weston et al., 2014; Sukhbaatar et al., 2015; Tran et al., 2016). We explored various attention styles (including traditional schemes which use one vector for a topic), but found this approach to work best. 
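A compact sketch of the topic-model component just described — convolution over the document word embeddings, max-over-time pooling, and attention over the topic tables A and B — is given below. This is our own numpy rendering of Equations 1 and 2, not the released TensorFlow implementation, and all dimensions in the toy example are arbitrary.

```python
import numpy as np

# Illustrative forward pass of the tdlm topic component (Eqs. 1-2):
#   d = max-over-time pooling of convolutional features over word embeddings
#   p = softmax(A d)   attention over the k topics
#   s = B^T p          document-topic representation (weighted mean of topic vectors)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def topic_component(X, conv_W, conv_b, A, B, h=2):
    """X: (n, e) word embeddings of one document (current sentence excluded)."""
    n, e = X.shape
    a = conv_W.shape[0]                          # number of convolutional filters
    feats = np.empty((a, n - h + 1))
    for i in range(n - h + 1):
        window = X[i:i + h].reshape(-1)          # concatenation of h word vectors
        feats[:, i] = conv_W @ window + conv_b   # identity activation, as in the paper
    d = feats.max(axis=1)                        # max-over-time pooling, d in R^a
    p = softmax(A @ d)                           # Eq. 1: attention over topics
    s = B.T @ p                                  # Eq. 2: weighted mean of topic vectors
    return d, p, s

# toy sizes just to exercise the sketch
rng = np.random.default_rng(1)
n, e, a, b, k, h = 12, 8, 5, 6, 4, 2
d, p, s = topic_component(rng.normal(size=(n, e)),
                          rng.normal(size=(a, h * e)), np.zeros(a),
                          rng.normal(size=(k, a)), rng.normal(size=(k, b)), h=h)
print(p.shape, s.shape)   # (4,) (6,)
```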
357 We use a gating unit similar to a GRU (Cho et al., 2014; Chung et al., 2014) to allow tdlm to learn the degree of influence of topical information on the language model: zt = σ(Wzs + Uzht + bz) rt = σ(Wrs + Urht + br) ˆht = tanh(Whs + Uh(rt ⊙ht) + bh) h′ t = (1 −zt) ⊙ht + zt ⊙ˆht (3) where zt and rt are the update and reset gate activations respectively at timestep t. The new hidden state h′ t is connected to a dense layer with linear transformation and softmax output to predict the next word, and the model is optimised using standard categorical cross-entropy loss. 3.3 Training and Regularisation tdlm is trained using minibatches and SGD.3 For the language model, a minibatch consists of a batch of sentences, while for the topic model it is a batch of documents (each predicting a sequence of m1 words). We treat the language and topic models as subtasks in a multi-task learning setting, and train them jointly using categorical cross-entropy loss. Most parameters in the topic model are shared by the language model, as illustrated by their scopes (dotted lines) in Figure 1. Hyper-parameters of tdlm are detailed in Table 1. Word embeddings for the topic model and language model components are not shared, although their dimensions are the same (e).4 For m1, m2 and m3, sequences/documents shorter than these thresholds are padded. Sentences longer than m2 are broken into multiple sequences, and documents longer than m3 are truncated. Optimal hyper-parameter settings are tuned using the development set; the presented values are used for experiments in Sections 4 and 5. To regularise tdlm, we use dropout regularisation (Srivastava et al., 2014). We apply dropout to d and s in the topic model, and to the input word embedding and hidden output of the LSTM in the language model (Pham et al., 2013; Zaremba et al., 2014). 4 Language Model Evaluation We use standard language model perplexity as the evaluation metric. In terms of dataset, we use doc3We use Adam as the optimiser (Kingma and Ba, 2014). 4Word embeddings are updated during training. ument collections from 3 sources: APNEWS, IMDB and BNC. APNEWS is a collection of Associated Press5 news articles from 2009 to 2016. IMDB is a set of movie reviews collected by Maas et al. (2011). BNC is the written portion of the British National Corpus (BNC Consortium, 2007), which contains excerpts from journals, books, letters, essays, memoranda, news and other types of text. For APNEWS and BNC, we randomly sub-sample a set of documents for our experiments. For preprocessing, we tokenise words and sentences using Stanford CoreNLP (Klein and Manning, 2003). We lowercase all word tokens, filter word types that occur less than 10 times, and exclude the top 0.1% most frequent word types.6 We additionally remove stopwords for the topic model document context.7 All datasets are partitioned into training, development and test sets; preprocessed dataset statistics are presented in Table 2. We tune hyper-parameters of tdlm based on development set language model perplexity. In general, we find that optimal settings are fairly robust across collections, with the exception of m3, as document length is collection dependent; optimal hyper-parameter values are given in Table 1. In terms of LSTM size, we explore 2 settings: a small model with 1 LSTM layer and 600 hidden units, and a large model with 2 layers and 900 hidden units.8 For the topic number, we experiment with 50, 100 and 150 topics. 
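The GRU-like gate of Equation 3 (Section 3.2), which decides how much of the document-topic vector s flows into the language model's hidden state, can be sketched as follows. This is our own illustration with randomly initialised parameters, not the released implementation; the dimensions are toy values.

```python
import numpy as np

# Sketch of the gating unit of Eq. 3: combines the document-topic vector s with the
# LSTM hidden state h_t to produce h'_t, which feeds the softmax over the vocabulary.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def topic_gate(s, h_t, params):
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ s + Uz @ h_t + bz)          # update gate
    r = sigmoid(Wr @ s + Ur @ h_t + br)          # reset gate
    h_hat = np.tanh(Wh @ s + Uh @ (r * h_t) + bh)
    return (1.0 - z) * h_t + z * h_hat           # h'_t

# toy dimensions: topic output size b = 6, LSTM hidden size 10
rng = np.random.default_rng(2)
b, hid = 6, 10
params = (rng.normal(size=(hid, b)), rng.normal(size=(hid, hid)), np.zeros(hid),
          rng.normal(size=(hid, b)), rng.normal(size=(hid, hid)), np.zeros(hid),
          rng.normal(size=(hid, b)), rng.normal(size=(hid, hid)), np.zeros(hid))
print(topic_gate(rng.normal(size=b), rng.normal(size=hid), params).shape)  # (10,)
```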
Word embeddings are pre-trained 300-dimension word2vec Google News vectors.9 For comparison, we compare tdlm with:10 vanilla-lstm: A standard LSTM language model, using the same tdlm hyper-parameters where applicable. This is the baseline model. lclm: A larger context language model that incorporates context from preceding sentences (Wang and Cho, 2016), by treating the preceding sentence as a bag of words, and using an 5https://www.ap.org/en-gb/. 6For the topic model, we remove word tokens that correspond to these filtered word types; for the language model we represent them as ⟨unk⟩tokens (as for unseen words in test). 7We use Mallet’s stopword list: https://github. com/mimno/Mallet/tree/master/stoplists. 8Multi-layer LSTMs are vanilla stacked LSTMs without skip connections (Gers and Schmidhuber, 2000) or depthgating (Yao et al., 2015). 9https://code.google.com/archive/p/ word2vec/. 10Note that all models use the same pre-trained word2vec vectors. 358 HyperValue Description parameter m1 3 Output sequence length for topic model m2 30 Sequence length for language model m3 300,150,500 Maximum document length nbatch 64 Minibatch size nlayer 1,2 Number of LSTM layers nhidden 600,900 LSTM hidden size nepoch 10 Number of training epochs k 100,150,200 Number of topics e 300 Word embedding size h 2 Convolutional filter width a 20 Topic input vector size or number of features for convolutional filter b 50 Topic output vector size l 0.001 Learning rate of optimiser p1 0.4 Topic model dropout keep probability p2 0.6 Language model dropout keep probability Table 1: tdlm hyper-parameters; we experiment with 2 LSTM settings and 3 topic numbers, and m3 varies across the three domains (APNEWS, IMDB, and BNC). Collection Training Development Test #Docs #Tokens #Docs #Tokens #Docs #Tokens APNEWS 50K 15M 2K 0.6M 2K 0.6M IMDB 75K 20M 12.5K 0.3M 12.5K 0.3M BNC 15K 18M 1K 1M 1K 1M Table 2: Preprocessed dataset statistics. attentional mechanism when predicting the next word. An additional hyper-parameter in lclm is the number of preceeding sentences to incorporate, which we tune based on a development set (to 4 sentences in each case). All other hyperparameters (such as nbatch, e, nepoch, k2) are the same as tdlm. lstm+lda: A standard LSTM language model that incorporates LDA topic information. We first train an LDA model (Blei et al., 2003; Griffiths and Steyvers, 2004) to learn 50/100/150 topics for APNEWS, IMDB and BNC.11 For a document, the LSTM incorporates the LDA topic distribution (q) by concatenating it with the output hidden state (ht) to predict the next word (i.e. h′ t = ht ⊕q). That is, it incorporates topical information into the language model, but unlike tdlm the language model and topic model are trained separately. We present language model perplexity performance in Table 3. All models outperform the baseline vanilla-lstm, with tdlm performing the 11Based on Gibbs sampling; α = 0.1, β = 0.01. best across all collections. lclm is competitive over the BNC, although the superiority of tdlm for the other collections is substantial. lstm+lda performs relatively well over APNEWS and IMDB, but very poorly over BNC. The strong performance of tdlm over lclm suggests that compressing document context into topics benefits language modelling more than using extra context words directly.12 Overall, our results show that topical information can help language modelling and that joint inference of topic and language model produces the best results. 
5 Topic Model Evaluation We saw that tdlm performs well as a language model, but it is also a topic model, and like LDA it produces: (1) a probability distribution over topics for each document (Equation (1)); and (2) a probability distribution over word types for each topic. 12The context size of lclm (4 sentences) is technically smaller than tdlm (full document), however, note that increasing the context size does not benefit lclm, as the context size of 4 gives the best performance. 359 Domain LSTM Size vanillalclm lstm+lda tdlm lstm 50 100 150 50 100 150 APNEWS small 64.13 54.18 57.05 55.52 54.83 53.00 52.75 52.65 large 58.89 50.63 52.72 50.75 50.17 48.96 48.97 48.21 IMDB small 72.14 67.78 69.58 69.64 69.62 63.67 63.45 63.82 large 66.47 67.86 63.48 63.04 62.78 58.99 59.04 58.59 BNC small 102.89 87.47 96.42 96.50 96.38 87.42 85.99 86.43 large 94.23 80.68 88.42 87.77 87.28 82.62 81.83 80.58 Table 3: Language model perplexity performance of all models over APNEWS, IMDB and BNC. Boldface indicates best performance in each row. lda ntm tdlm-small tdlm-large 0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 (a) APNEWS lda ntm tdlm-small tdlm-large 0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 (b) IMDB lda ntm tdlm-small tdlm-large 0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 (c) BNC Figure 2: Boxplots of topic coherence of all models; number of topics = 100. Recall that s is a weighted mean of topic vectors for a document (Equation (2)). Generating the vocabulary distribution for a particular topic is therefore trivial: we can do so by treating s as having maximum weight (1.0) for the topic of interest, and no weight (0.0) for all other topics. Let Bt denote the topic output vector for the t-th topic. To generate the multinomial distribution over word types for the t-th topic, we replace s with Bt before computing the softmax over the vocabulary. Topic models are traditionally evaluated using model perplexity. There are various ways to estimate test perplexity (Wallach et al., 2009), but Chang et al. (2009) show that perplexity does not correlate with the coherence of the generated topics. Newman et al. (2010b); Mimno et al. (2011); Aletras and Stevenson (2013) propose automatic approaches to computing topic coherence, and Lau et al. (2014) summarises these methods to understand their differences. We propose using automatic topic coherence as a means to evaluate the topic model aspect of tdlm. Following Lau et al. (2014), we compute topic coherence using normalised PMI (“NPMI”) scores. Given the top-n words of a topic, coherence is computed based on the sum of pairwise NPMI scores between topic words, where the word probabilities used in the NPMI calculation are based on co-occurrence statistics mined from English Wikipedia with a sliding window (Newman et al., 2010b; Lau et al., 2014).13 Based on the findings of Lau and Baldwin (2016), we average topic coherence over the top5/10/15/20 topic words. To aggregate topic coherence scores for a model, we calculate the mean coherence over topics. In terms of datasets, we use the same document collections (APNEWS, IMDB and BNC) as the language model experiments (Section 4). We use the same hyper-parameter settings for tdlm and do not tune them. For comparison, we use the following topic models: lda: We use a LDA model as a baseline topic model. We use the same LDA models as were used to learn topic distributions for lstm+lda (Section 4). 13We use this toolkit to compute topic coherence: https://github.com/jhlau/topic_ interpretability. 360 Topic No. 
System Coherence APNEWS IMDB BNC 50 lda .125 .084 .106 ntm .075 .064 .081 tdlm-small .149 .104 .102 tdlm-large .130 .088 .095 100 lda .136 .092 .119 ntm .085 .071 .070 tdlm-small .152 .087 .106 tdlm-large .142 .097 .101 150 lda .134 .094 .119 ntm .078 .075 .072 tdlm-small .147 .085 .100 tdlm-large .145 .091 .104 Table 4: Mean topic coherence of all models over APNEWS, IMDB and BNC. Boldface indicates the best performance for each dataset and topic setting. ntm: ntm is a neural topic model proposed by Cao et al. (2015). The document-topic and topicword multinomials are expressed from a neural network perspective using differentiable functions. Model hyper-parameters are tuned using development loss. Topic model performance is presented in Table 4. There are two models of tdlm (tdlm-small and tdlm-large), which specify the size of its LSTM model (1 layer+600 hidden vs. 2 layers+900 hidden; see Section 4). tdlm achieves encouraging results: it has the best performance over APNEWS, and is competitive over IMDB. lda, however, produces more coherent topics over BNC. Interestingly, coherence appears to increase as the topic number increases for lda, but the trend is less pronounced for tdlm. ntm performs the worst of the 3 topic models, and manual inspection reveals that topics are in general not very interpretable. Overall, the results suggest that tdlm topics are competitive: at best they are more coherent than lda topics, and at worst they are as good as lda topics. To better understand the spread of coherence scores and impact of outliers, we present box plots for all models (number of topics = 100) over the 3 domains in Figure 2. Across all domains, ntm has poor performance and larger spread of scores. The difference between lda and tdlm is small (tdlm > lda in APNEWS, but lda < tdlm in BNC), which is consistent with our previous observation that tdlm topics are competitive with lda topics. Partition #Docs #Tokens Training 9314 2.6M Development 2000 0.5M Test 7532 1.7M Table 5: 20NEWS preprocessed statistics. 6 Extensions One strength of tdlm is its flexibility, owing to it taking the form of a neural network. To showcase this flexibility, we explore two simple extensions of tdlm, where we: (1) build a supervised model using document labels (Section 6.1); and (2) incorporate additional document metadata (Section 6.2). 6.1 Supervised Model In datasets where document labels are known, supervised topic model extensions are designed to leverage the additional information to improve modelling quality. The supervised setting also has an additional advantage in that model evaluation is simpler, since models can be quantitatively assessed via classification accuracy. To incorporate supervised document labels, we treat document classification as another sub-task in tdlm. Given a document and its label, we feed the document through the topic model network to generate the document-topic representation s, and connect it to another dense layer with softmax output to generate the probability distribution over classes. During training, we have additional minibatches for the documents. We start the document classification training after the topic and language models have completed training in each epoch. We use 20NEWS in this experiment, which is a popular dataset for text classification. 20NEWS is a collection of forum-like messages from 20 newsgroups categories. We use the “bydate” version of the dataset, where the train and test partition is separated by a specific date. 
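As a sketch of the supervised extension just described, the document-topic representation s is passed to a dense layer with softmax output over the newsgroup classes; the class and variable names below are ours.

```python
import torch.nn as nn

class DocClassifierHead(nn.Module):
    """Dense layer over the document-topic representation s (Equation (2))."""
    def __init__(self, topic_vec_dim, num_classes=20):
        super().__init__()
        self.dense = nn.Linear(topic_vec_dim, num_classes)

    def forward(self, s):
        # s: (batch, topic_vec_dim) weighted mean of topic vectors for a document
        return self.dense(s)   # logits; nn.CrossEntropyLoss applies the softmax

# Classification is treated as a third sub-task: after the topic- and
# language-model minibatches of each epoch, labelled documents are fed through
# the topic network to obtain s and optimised with cross-entropy against
# their 20NEWS labels.
```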
We sample 2K documents from the training set to create the development set. For preprocessing we tokenise words and sentence using Stanford CoreNLP (Klein and Manning, 2003), and lowercase all words. As with previous experiments (Section 4) we additionally filter low/high frequency word types and stopwords. Preprocessed dataset statistics are presented in Table 5. For comparison, we use the same two topic 361 Topic No. System Accuracy 50 lda .567 ntm .649 tdlm .606 100 lda .581 ntm .639 tdlm .602 150 lda .597 ntm .628 tdlm .601 Table 6: 20NEWS classification accuracy. All models are supervised extensions of the original models. Boldface indicates the best performance for each topic setting. Topic No. Metadata Coherence Perplexity 50 No .128 52.45 Yes .131 51.80 100 No .142 52.14 Yes .139 51.76 150 No .135 52.25 Yes .143 51.58 Table 7: Topic coherence and language model perplexity by incorporating classification tags on APNEWS. Boldface indicates optimal coherence and perplexity performance for each topic setting. models as in Section 5: ntm and lda. Both ntm and lda have natural supervised extensions (Cao et al., 2015; McAuliffe and Blei, 2008) for incorporating document labels. For this task, we tune the model hyper-parameters based on development accuracy.14 Classification accuracy for all models is presented in Table 6. We present tdlm results using only the small setting of LSTM (1 layer + 600 hidden), as we found there is little gain when using a larger LSTM. ntm performs very strongly, outperforming both lda and tdlm by a substantial margin. Comparing lda and tdlm, tdlm achieves better performance, especially when there is a smaller number of topics. Upon inspection of the topics we found that ntm topics are much less coherent than those of lda and tdlm, consistent with our observations from Section 5. 14Most hyper-parameter values for tdlm are similar to those used in the language and topic model experiments; the only exceptions are: a = 80, b = 100, nepoch = 20, m3 = 150. The increase in parameters is unsurprising, as the additional supervision provides more constraint to the model. Figure 3: Scatter plots of tag embeddings (model=150 topics) 6.2 Incorporating Document Metadata In APNEWS, each news article contains additional document metadata, including subject classification tags, such as “General News”, “Accidents and Disasters”, and “Military and Defense”. We present an extension to incorporate document metadata in tdlm to demonstrate its flexibility in integrating this additional information. As some of the documents in our original APNEWS sample were missing tags, we re-sampled a set of APNEWS articles of the same size as our original, all of which have tags. In total, approximately 1500 unique tags can be found among the training articles. To incorporate these tags, we represent each of them as a learnable vector and concatenate it with the document vector before computing the attention distribution. Let zi ∈Rf denote the f-dimension vector for the i-th tag. For the j-th document, we sum up all tags associated with it: e = ntags X i=1 I(i, j)zi where ntags is the total number of unique tags, and function I(i, j) returns 1 is the i-th tag is in the j-th document or 0 otherwise. We compute d as before (Section 3.1), and concatenate it with the summed tag vector: d′ = d ⊕e. 
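A hedged sketch of this metadata extension follows: each tag has a learnable f-dimensional vector zi, the vectors of a document's tags are summed, and the sum is concatenated with the document vector before the attention step (d′ = d ⊕ e). The names and the 0-padding scheme are our assumptions.

```python
import torch
import torch.nn as nn

class TagAugmentedDocVector(nn.Module):
    def __init__(self, n_tags, tag_dim=5):
        super().__init__()
        # one learnable z_i per unique tag; index 0 is reserved for padding and
        # contributes a zero vector, so the sum ignores padded slots
        self.tag_emb = nn.Embedding(n_tags + 1, tag_dim, padding_idx=0)

    def forward(self, d, tag_ids):
        # d:       (batch, doc_dim) document vector from the convolutional encoder
        # tag_ids: (batch, max_tags) tag indices for each document, 0-padded
        e = self.tag_emb(tag_ids).sum(dim=1)   # summed tag vector per document
        return torch.cat([d, e], dim=-1)       # d' = d ⊕ e, fed to the attention
```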
We train two versions of tdlm on the new APNEWS dataset: (1) the vanilla version that ignores the tag information; and (2) the extended version which incorporates tag information.15 We exper15Model hyper-parameters are the same as the ones used in the language (Section 4) and topic model (Section 5) experiments. 362 Topic Generated Sentences protesters suspect gunman officers occupy gun arrests suspects shooting officer • police say a suspect in the shooting was shot in the chest and later shot and killed by a police officer . • a police officer shot her in the chest and the man was killed . • police have said four men have been killed in a shooting in suburban london . film awards actress comedy music actor album show nominations movie • it ’s like it ’s not fair to keep a star in a light , ” he says . • but james , a four-time star , is just a ⟨unk⟩. • a ⟨unk⟩adaptation of the movie ” the dark knight rises ” won best picture and he was nominated for best drama for best director of ” ⟨unk⟩, ” which will be presented sunday night . storm snow weather inches flooding rain service winds tornado forecasters • temperatures are forecast to remain above freezing enough to reach a tropical storm or heaviest temperatures . • snowfall totals were one of the busiest in the country . • forecasters say tornado irene ’s strong winds could ease visibility and funnel clouds of snow from snow monday to the mountains . virus nile flu vaccine disease outbreak infected symptoms cough tested • he says the disease was transmitted by an infected person . • ⟨unk⟩says the man ’s symptoms are spread away from the heat . • meanwhile in the ⟨unk⟩, the virus has been common in the mojave desert . Table 8: Generated sentences for APNEWS topics. imented with a few values for the tag vector size (f) and find that a small value works well; in the following experiments we use f = 5. We evaluate the models based on language model perplexity and topic model coherence, and present the results in Table 7.16 In terms of language model perplexity, we see a consistent improvement over different topic settings, suggesting that the incorporation of tags improves modelling. In terms of topic coherence, there is a small but encouraging improvement (with one exception). To investigate whether the vectors learnt for these tags are meaningful, we plot the top-14 most frequent tags in Figure 3.17 The plot seems reasonable: there are a few related tags that are close to each other, e.g. “State government” and “Government and politics”; “Crime” and “Violent Crime”; and “Social issues” and “Social affairs”. 7 Discussion Topics generated by topic models are typically interpreted by way of their top-N highest probability words. In tdlm, we can additionally generate sentences related to the topic, providing another way to understand the topics. To do this, we can constrain the topic vector for the language model to be the topic output vector of a particular topic (Equation (3)). We present 4 topics from a APNEWS model (k = 100; LSTM size = “large”) and 3 randomly generated sentences conditioned on each 16As the vanilla tdlm is trained on the new APNEWS dataset, the numbers are slightly different to those in Tables 3 and 4. 17The 5-dimensional vectors are compressed using PCA. topic in Table 8.18 The generated sentences highlight the content of the topics, providing another interpretable aspect for the topics. These results also reinforce that the language model is driven by topics. 
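The generation procedure can be sketched as follows. The `step` interface of the language model is an assumption made for illustration; the sampling settings (temperature 0.75, stopping at an end symbol or 40 words) follow footnote 18.

```python
import torch

def generate_sentence(language_model, topic_output_vectors, topic_id,
                      start_id, end_id, temperature=0.75, max_len=40):
    # Fix the topic input of the gating unit to one topic's output vector B_t.
    s = topic_output_vectors[topic_id]
    tokens, state = [start_id], None
    for _ in range(max_len):
        # `step` is an assumed interface returning next-word logits given the
        # previous word, the fixed topic vector and the recurrent state.
        logits, state = language_model.step(tokens[-1], s, state)
        probs = torch.softmax(logits / temperature, dim=-1)
        next_id = int(torch.multinomial(probs, num_samples=1))
        if next_id == end_id:          # special end symbol terminates generation
            break
        tokens.append(next_id)
    return tokens[1:]
```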
8 Conclusion We propose tdlm, a topically driven neural language model. tdlm has two components: a language model and a topic model, which are jointly trained using a neural network. We demonstrate that tdlm outperforms a state-of-the-art language model that incorporates larger context, and that its topics are potentially more coherent than LDA topics. We additionally propose simple extensions of tdlm to incorporate information such as document labels and metadata, and achieved encouraging results. Acknowledgments We thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was funded in part by the Australian Research Council. References Nikos Aletras and Mark Stevenson. 2013. Evaluating topic coherence using distributional semantics. In Proceedings of the Tenth International Workshop on Computational Semantics (IWCS-10). Potsdam, Germany, pages 13–22. 18Words are sampled with temperature = 0.75. Generation is terminated when a special end symbol is generated or when sentence length is greater than 40 words. 363 David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3:993–1022. BNC Consortium. 2007. The British National Corpus, version 3 (BNC XML Edition). Distributed by Oxford University Computing Services on behalf of the BNC Consortium. http://www.natcorp.ox.ac.uk/. Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its supervised extension. In Proceedings of the 29th Annual Conference on Artificial Intelligence (AAAI15). Austin, Texas, pages 2210–2216. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-Graber, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances in Neural Information Processing Systems 21 (NIPS-09). Vancouver, Canada, pages 288–296. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Doha, Qatar, pages 103–111. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning and Representation Learning Workshop. Montreal, Canada, pages 103– 111. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Felix A. Gers and J¨urgen Schmidhuber. 2000. Recurrent nets that time and count. In Proceedings of the International Joint Conference on Neural Networks (IJCNN’2000). Como, Italy, pages 198–194. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR abs/1410.5401. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences 101:5228–5235. Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2004. Integrating topics and syntax. In Advances in Neural Information Processing Systems 17 (NIPS-05). Vancouver, Canada, pages 537–544. David Hall, Daniel Jurafsky, and Christopher D. Manning. 2008. Studying the history of ideas using topic models. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP 2008). Honolulu, USA, pages 363–371. Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 
2009. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems 21 (NIPS-09). Vancouver, Canada, pages 1607– 1614. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9:1735– 1780. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2016. Document context language models. In Proceedings of ICLR-16 Workshop, 2016. Toulon, France. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003). Sapporo, Japan, pages 423–430. Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. In Advances in Neural Information Processing Systems 25. pages 2708– 2716. Jey Han Lau and Timothy Baldwin. 2016. The sensitivity of topic coherence evaluation to topic cardinality. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics — Human Language Technologies (NAACL HLT 2016). San Diego, USA, pages 483– 487. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the EACL (EACL 2014). Gothenburg, Sweden, pages 530–539. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL HLT 2011). Portland, Oregon, USA, pages 142–150. Jon D. McAuliffe and David M. Blei. 2008. Supervised topic models. In Advances in Neural Information Processing Systems 20 (NIPS-08). Vancouver, Canada, pages 121–128. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010). Makuhari, Japan, pages 1045– 1048. 364 David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). Edinburgh, UK, pages 262–272. David Newman, Timothy Baldwin, Lawrence Cavedon, Sarvnaz Karimi, David Martinez, and Justin Zobel. 2010a. Visualizing document collections and search results using topic mapping. Journal of Web Semantics 8(2–3):169–175. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010b. Automatic evaluation of topic coherence. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010). Los Angeles, USA, pages 100–108. Vu Pham, Christopher Kermorvant, and J´erˆome Louradour. 2013. Dropout improves recurrent neural networks for handwriting recognition. CoRR abs/1312.4569. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. 
In Advances in Neural Information Processing Systems 28 (NIPS-15). Montreal, Canada, pages 2440–2448. Ke Tran, Arianna Bisazza, and Christof Monz. 2016. Recurrent memory networks for language modeling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics — Human Language Technologies (NAACL HLT 2016). San Diego, California, pages 321–331. Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning (ICML-09). Montreal, Canada, pages 1105–1112. Li Wan, Leo Zhu, and Rob Fergus. 2012. A hybrid neural network-latent topic model. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12). La Palma, Canary Islands, pages 1287–1294. Tian Wang and Kyunghyun Cho. 2016. Largercontext language modelling with recurrent neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany, pages 1319–1329. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-Markov continuous-time model of topical trends. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Philadelphia, USA, pages 424–433. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. CoRR abs/1410.3916. Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. CoRR abs/1508.03790. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. CoRR abs/1409.2329. 365
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 366–376 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1034 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 366–376 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1034 Handling Cold-Start Problem in Review Spam Detection by Jointly Embedding Texts and Behaviors Xuepeng Wang1,2, Kang Liu1, and Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 University of Chinese Academy of Sciences, Beijing, 100049, China {xpwang, kliu, jzhao}@nlpr.ia.ac.cn Abstract Solving the cold-start problem in review spam detection is an urgent and significant task. It can help the on-line review websites to relieve the damage of spammers in time, but has never been investigated by previous work. This paper proposes a novel neural network model to detect review spam for the cold-start problem, by learning to represent the new reviewers’ review with jointly embedded textual and behavioral information. Experimental results prove the proposed model achieves an effective performance and possesses preferable domain-adaptability. It is also applicable to a large-scale dataset in an unsupervised way. 1 Introduction With the rapid growth of products reviews at the web, it has become common for people to read reviews before making a purchase decision. The reviews usually contain abundant consumers’ personal experiences. It has led to a significant influence on financial gains and fame for businesses. Existing studies have shown that an extra halfstar rating on Yelp causes restaurants to sell out 19% points more frequently (Anderson and Magruder, 2012), and a one-star increase in Yelp rating leads to a 5-9 % increase in revenue (Luca, 2011). This, unfortunately, gives strong incentives for imposters (called spammers) to game the system. They post fake reviews or opinions (called review spam) to promote or to discredit some targeted products and services. The news from BBC has shown that around 25% of Yelp reviews could be fake.1 Therefore, it is urgent to detect review s1http://www.bbc.com/news/technology-24299742 pam, to ensure that the online review continues to be trusted. Jindal and Liu (2008) make the first step to detect review spam. Most efforts are devoted to exploring effective linguistic and behavioral features by subsequent work to distinguish such spam from the real reviews. However, to notice such patterns or form behavioral features, developers should take a long time to observe the data, because the features are based on statistics. For instance, the feature activity window proposed by Mukherjee et al. (2013c) is to measure the activity freshness of reviewers. It usually takes several months to count the difference of timestamps between the last and first reviews for reviewers. When the features show themselves finally, some major damages might have already been done. Thus, it is important to design algorithms that can detect review spam as soon as possible, ideally, right after they are posted by the new reviewers. It is a coldstart problem which is the focus of this paper. In this paper, we assume that we must identify fake reviews immediately when a new reviewer posts just one review. 
Unfortunately, it is very difficult because the available information for detecting fake reviews is very poor. Traditional behavioral features based on the statistics can only work well on users’ abundant behaviors. The more behavioral information obtained, the more effective the traditional behavioral features are (see experiments in Section 3 ). In the scenario of cold-start, a new reviewer only has a behavior: post a review. As a result, we can not get effective behavioral features from the data. Although, the linguistic features of reviews do not need to take much time to form, Mukherjee et al. (2013c) have proved that the linguistic features are not effective enough in detecting real-life fake reviews from the commercial websites, where we also obtain the same observation (the details are shown in Section 3). 366 Therefore, the main difficulty of the cold-start spam problem is that there are no sufficient behaviors of the new reviewers for constructing effective behavioral features. Nevertheless, there is ample textual and behavioral information contained in the abundant reviews posted by the existing reviewers (Figure 1). We could employ behavioral information of existing similar reviewers to a new reviewer to approximate his behavioral features. We argue that a reviewer’s individual characteristics such as background information, motivation, and interactive behavior style have a great influence on a reviewer’s textual and behavioral information. So the textual information and the behavioral information of a reviewer are correlated with each other (similar argument in Li et al. (2016)). For example, the students of the college are likely to choose the youth hostel during summer vacation and tend to comment the room price in their reviews. But the financial analysts on a business trip may tend to choose the business hotel, the environment and service are what they care about in their reviews. To augment the behavioral information of the new reviewers in the cold-start problem, we first try to find the textual information which is similar with that of the new reviewer, from the existing reviews. There are several ways to model the textual information of the review spam, such as Unigram (Mukherjee et al., 2013c), POS (Ott et al., 2011) and LIWC (Linguistic Inquiry and Word Count) (Newman et al., 2003). We employ the CNN (Convolutional Neural Network) to model the review text, which has been proved that it can capture complex global semantic information that is difficult to express using traditional discrete manual features (Ren and Zhang, 2016). Then we employ the behavioral information which is correlated with the found textual information to approximate the behavioral information of the new reviewer. An intuitive approach is to search the most similar existing review for the new review, then take the found reviewer’s behavioral features as the new reviewers’ features (detailed in Section 5.3). However, there is abundant behavioral information in the review graph (Figure 1), it is difficult for the traditional discrete manual behavioral features to record the global behavioral information (Wang et al., 2016). Moreover, the traditional features can not capture the reviewer’s individual characteristics, because there is no explicit characteristic tag available in the review system (experiHŽƚĞůͺϮ HŽƚĞůͺϭ RĞǀŝĞǁͺϭ RĞǀŝĞǁͺϮ RĞǀŝĞǁͺϯ RĞǀŝĞǁͺϰ A B C Figure 1: Part of review graph simplified from Yelp. ments in Section 5.3). 
So, we propose a neural network model to jointly encode the textual and behavioral information into the review embeddings for detecting the review spam in the cold-start problem. By encoding the review graph structure (Figure 1), the proposed model can record the global footprints of the existing reviewers in an unsupervised way, and further record the reviewers’ latent characteristic information in the footprints. The jointly learnt review embeddings can model the correlation of the reviewers’ textual and behavioral information. When a new reviewer posts a review, the proposed model can represent the review with the similar textual information and the correlated behavioral information encoded in the word embeddings. Finally, the embeddings of the new review are fed into a classifier to identify whether it is spam or not. In summary, our major contributions include: • To our best knowledge, this is the first work that explores the cold-start problem in review spam detection. We qualitatively and quantitatively prove that the traditional linguistic and behavioral features are not effective enough in detecting review spam for the coldstart task. • We propose a neural network model to jointly encode the textual and behavioral information into the review embeddings for the cold-start spam detection task. It is an unsupervised distributional representation model which can learn from large scale unlabeled review data. • Experimental results on two domains (hotel and restaurant) give good confidence that the proposed model performs effectively in the cold-start spam detection task. 2 Related Work Jindal and Liu (2008) make the first step to detect review spam. Subsequent work devoted most 367 efforts to explore effective features and spammerlike clues. Linguistic features: Ott et al. (2011) applied psychological and linguistic clues to identify review spam; Harris (2012) explored several writing style features. Syntactic stylometry for review spam detection was investigated in Feng et al. (2012a); Xu and Zhao (2012) using deep linguistic features for finding deceptive opinion spam; Li et al. (2013) studied the topics in the review spam; Li et al. (2014b) further analyzed the general difference of language usage. Fornaciari and Poesio (2014) proved the effectiveness of the N-grams in detecting deceptive Amazon book reviews. The effectiveness of the N-grams was also explored in Cagnina and Rosso (2015). Li et al. (2014a) proposed a positive-unlabeled learning method based on unigrams and bigrams; Kim et al. (2015) carried out a frame-based deep semantic analysis. Hai et al. (2016) exploited the relatedness of multiple review spam detection tasks and available unlabeled data to address the scarcity of labeled opinion spam data by using linguistic features. Besides, (Ren and Zhang, 2016) proved that the CNN model is more effective than the RNN and the traditional discrete manual linguistic features. Hovy (2016) used N-gram generative models to produce reviews and evaluated their effectiveness. Behavioral features: Lim et al. (2010) analyzed reviewers’ rating behavioral features; Jindal et al. (2010) identified unusual review patterns which can represent suspicious behaviors of reviews; Li et al. (2011) proposed a two-view semisupervised co-training method base on behavioral features. Feng et al. (2012b) study the distributions of individual spammers’ behaviors. The group spammers’ behavioral features were studied in Mukherjee et al. (2012). 
Temporal patterns of spammers were investigated by Xie et al. (2012), Fei et al. (2013); Li et al. (2015) explored the temporal and spatial patterns. The review graph was analyzed by Wang et al. (2011), Akoglu et al. (2013); Mukherjee et al. (2013a) studied the spamicity of reviewers. Mukherjee et al. (2013c), Mukherjee et al. (2013b) proved that reviewers’ behavioral features are more effective than reviews’ linguistic features for detecting review spam. Based on this conclusion, recently, researchers (Rayana and Akoglu, 2015; KC and Mukherjee, 2016) have put more efforts in employing reviewers’ behavioral features for deFeatures P R F1 A LF 54.5 71.1 61.7 55.9 LF+BF 63.4 52.6 57.5 61.1 LF+BF abundant 69.1 63.5 66.2 67.5 (a) Hotel Features P R F1 A LF 53.8 80.8 64.6 55.8 LF+BF 58.1 61.2 59.6 58.5 LF+BF abundant 56.6 78.2 65.7 59.1 (b) Restaurant Table 1: SVM classification results across linguistic features (LF, bigrams here (Mukherjee et al., 2013b)), behavioral features (BF: RL, RD, MCS (Mukherjee et al., 2013b)) and behavioral features with abundant behavioral information (BF abundant). Both training and testing use balanced data (50:50). tecting review spam, the intuition behind which is to capture the reviewers’ actions and supposes that those reviews written with spammer-like behaviors would be spam. Wang et al. (2016) explored a method to learn the review representation with global behavioral information. Viviani and Pasi (2017) concentrated on the aggregation process with respect to each single veracity feature. 3 Whether Traditional Features are Effective As a new reviewer posted just one review and we have to identify it immediately, the major challenge of the cold-start task is that the available information about the new reviewer is very poor. The new reviewer only provides us with one review record. For most traditional features based on the statistics, they can not form themselves or make no sense, such as the percentage of reviews written at weekends (Li et al., 2015), the entropy of rating distribution of user’s review (Rayana and Akoglu, 2015). To investigate whether traditional features are effective in the cold-start task, we conducted experiments on the Yelp dataset in Mukherjee et al. (2013c). We trained SVM models with different features on the existing reviews posted before January 1, 2012, and tested on the new reviews which just posted by the new reviewers after January 1, 2012. Results are shown in Table 1. 368 3.1 Linguistic Features’ Poor Performance The linguistic features need not take much time to form. But Mukherjee et al. (2013c) have proved that the linguistic features are not effective enough in detecting real-life fake reviews from the commercial websites, compared with the performances on the crowd source datasets (Ott et al., 2011). They showed that the word bigrams perform better than the other linguistic features, such as LIWC (Newman et al., 2003; Pennebaker et al., 2007), part-of-speech sequence patterns (Mukherjee and Liu, 2010), deep syntax (Feng et al., 2012a), information gain (Mukherjee et al., 2013c) and so on. So, we conduct experiments with the word bigrams feature. As shown in Table 1 (a, b) row 1, the word bigrams result in only around 55% in accuracy in both the hotel and restaurant domains. It indicates that the most effective traditional linguistic feature (i.e., the word bigrams) can’t detect the review spam effectively in the cold start task. 
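For reference, the bigram + SVM baseline of this section could be reproduced roughly as in the scikit-learn sketch below; data loading, class balancing and the exact SVM configuration are not specified in the paper and are left here as assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.svm import LinearSVC

def bigram_svm_baseline(train_texts, train_labels, test_texts, test_labels):
    # Word-bigram features, the strongest traditional linguistic feature here;
    # labels are assumed to be 0 (non-spam) / 1 (spam).
    vectorizer = CountVectorizer(ngram_range=(2, 2), binary=True)
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    clf = LinearSVC().fit(X_train, train_labels)
    pred = clf.predict(X_test)
    p, r, f1, _ = precision_recall_fscore_support(test_labels, pred,
                                                  average='binary')
    return p, r, f1, accuracy_score(test_labels, pred)

# train_* would hold the balanced reviews posted before 2012-01-01 and test_*
# the first reviews of new reviewers posted after that date.
```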
3.2 Behavioral Features only Work Well with Abundant Information Because there is not enough available information about the new reviewer, for most traditional behavioral features based on the statistical mechanism, they couldn’t form themselves or make no sense. We investigated the previous work and found that there are three behavioral features can be applied to the cold-start task. They are proposed by Mukherjee et al. (2013b), i.e., 1.Review length (RL) : the length of the new review posted by the new reviewer; 2.Reviewer deviation (RD): the absolute rating deviation of the new reviewer’s review from other reviews on the same business; 3.Maximum content similarity (MCS) : the maximum content similarity (using cosine similarity) between the new reviewer’s review with other reviews on the same business. Table 1 (a, b) row 2 shows the experiment results by the combinations of the bigrams feature and the three behavioral features described above. The behavioral features make around 5% improvement in accuracy in the hotel domain (2.7% in the restaurant domain) as compared with only using bigrams. The accuracy is improved but it is just near 60% in average. It indicates that the traditional features are not effective enough with poor behavioral information. What’s more, the behavioral features cause around 4.6% decrease in F1score and around 19% decrease in Recall in both hotel and restaurant domains. It is obvious that there is more false-positive review spam caused by the behavioral features as compared to only using bigrams. It further indicates that the traditional behavioral features’ discrimination for review spam gets to be weakened by the poor behavioral information. To go a step further, we carried experiments with the three behavioral features which are formed on abundant behavioral information. When the new reviewers continue to post more reviews in after weeks, their behavioral information gets to be more. Then the review system could obtain sufficient data to extract behavior features as compared to the poor information in the cold-start period. So the behavioral features with abundant information make an obvious improvement in accuracy (6.4%) in the hotel domain (Table 1 (a) row 3) as compared with the results in Table 1 (a) row 2. But it is only 0.6% in the restaurant domain. By statistics on the datasets, we found that the new reviewers posted about 54.4 reviews in average after their first post in the hotel domain, but it is only 10 reviews in average for the new reviewers in the restaurant domain. The added behavioral information in the hotel domain is richer than that in the restaurant domain. It indicates that: • the traditional behavioral features can only work well with abundant behavioral information; • the more behavioral information can be obtained, the more effective the traditional behavioral features are. Figure 2: Illustration of our model. 369 4 The Proposed Model The difficulty of detecting review spam in the cold-start task is that the available behavioral information of new reviewers is very poor. The new reviewer just posted one review and we have to filter it out immediately, there is not any historical review provided to us. As we argued, the textual information and the behavioral information of a reviewer are correlated with each other. So, to augment the behavioral information of new reviewers, we try to find the textual information which is similar with that of the new reviewer, from existing reviews. 
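The three cold-start-applicable behavioral features of Section 3.2 (RL, RD, MCS) can be computed for a single new review as in the following sketch; measuring the rating deviation against the mean of the other ratings, and using TF-IDF vectors for the content similarity, are our assumptions rather than details given in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cold_start_behavioral_features(new_text, new_rating,
                                   business_texts, business_ratings):
    # RL: length (in tokens) of the new reviewer's single review
    rl = len(new_text.split())

    # RD: absolute deviation of the new rating from the other ratings on the
    # same business (here measured against their mean)
    rd = abs(new_rating - np.mean(business_ratings)) if business_ratings else 0.0

    # MCS: maximum cosine similarity between the new review and the other
    # reviews on the same business
    if business_texts:
        vec = TfidfVectorizer().fit(business_texts + [new_text])
        sims = cosine_similarity(vec.transform([new_text]),
                                 vec.transform(business_texts))
        mcs = float(sims.max())
    else:
        mcs = 0.0
    return rl, rd, mcs
```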
Then we take the behavioral information which is correlated with the found textual information as the most possible behavioral information of the new reviewer. For this purpose, we propose a neural network model to jointly encode the textual and behavioral information into the review embeddings for detecting the review spam in the cold-start problem (shown in Figure 2). When a new reviewer posts a review, the neural network can represent the review with the similar textual information and the correlated behavioral information encoded in the word embeddings. Finally, embeddings of the new review are fed into a classifier to identify whether it is spam or not. 4.1 Behavioral Information Encoding In Figure 1, there is a part of review graph which is simplified from the Yelp website. As it shows, the review graph contains the global behavioral information (footprints) of the existing reviewers. Because the motivations of the spammers and the real reviewers are totally different, the distributions of the behavioral information of them are different (Mukherjee et al., 2013a). There are businesses (even highly reputable ones) paying people to write fake reviews for them to promote their products/services and/or to discredit their competitors (Liu, 2015). So the behavioral footprints of the spammers are decided by the demands of the businesses. But the real reviewers only post reviews to the product or services they have actually experienced. Their behavioral footprints are influenced by their own characteristics. Previous work extracts behavioral features for reviewers from these behavioral information. But it is impractical to the new reviewers in the cold-start task. Moreover, the traditional discrete features can not effectively record the global behavioral information (Wang et al., 2016). Besides, there is no explicit characteristic tag available in the review system, and we need to find a way to record the reviewers’ latent characters information in footprints. Therefore we encode these behavioral information into our model by utilizing an embedding learning model which is similar with TransE (Bordes et al., 2013). TransE is a model which can encode the graph structure, and represent the nodes and edges (head, translation/relation, tail) in low dimension vector space. TransE has been proved that it is good at describing the global information of the graph structure by the work about distributional representation for knowledge base (Guu et al., 2015). We consider that each reviewer in review graph describes the product in his/her own view and writes the review. When we represent the product, reviewer, and review in low dimension vector space, the reviewer embeddings can be taken as a translation vector, which has translated the product embeddings to the review embeddings. So, as shown in Figure 2, we take the products (hotels/restaurants) as the head part of the TransE network in our model, take the reviewers as the translation (relation) part and take the review as the tail part. By learning from the existing large scale unlabeled reviews of the review graph, we can encode the global behavioral information into our model without extracting any traditional behavioral feature, and record reviewers’ latent characteristics information. 
More formally, we minimize a margin-based criterion over the training set: L = ∑ (β,α,τ)∈S ∑ (β′,α,τ ′)∈S′ max {0, 1 + d(β + α, τ) −d(β′ + α, τ ′)} (1) S denotes the training set of triples (β, α, τ) composed product β (β ∈B, products set (head part)), reviewer α (α ∈A, reviewers set (translation part)) and review text embeddings learnt by the CNN τ (τ ∈T, review texts set (tail part)). S′ = {(β′, α, τ)|β′ ∈B} ∪{(β, α, τ ′)|τ ′ ∈T} (2) The set of corrupted triplets S′ (Equation (2)), is composed of training triplets with either the product or review text replaced by a random chosen one (but not both at the same time). d(β + α, τ) = ∥β + α −τ∥2 2 , s.t. ∥β∥2 2 = ∥α∥2 2 = ∥τ∥2 2 = 1 (3) 370 Domain Hotel Restaurant #reviews 688328 788471 #reviewers 5132 35593 date range 2004.10.23 2012.09.26 2004.10.12 2012.10.02 %before 2012.01.01 99.01% 97.40% Table 2: Yelp Whole Dataset Statistics (Labeled and Unlabeled). d(β + α, τ) is the dissimilarity function with the squared euclidean distance. 4.2 Textual Information Encoding To encode the textual information into our model, we adopt a convolutional neural network (CNN) to learn to represent the existing reviews. By statistics, we find that a review usually refers to several aspects of the products or services. For example, a hotel review may comment the room price, the free WiFi, and the bathroom at the same time. Compared with the recurrent neural network (RNN), the CNN can do a better job of modeling the different aspects of a review. Ren and Zhang (2016) have proved that the CNN can capture complex global semantic information and detect review spam more effectively, compared with traditional discrete manual features and the RNN model. As shown in Figure 2, we take the learnt embeddings τ of reviews by the CNN as the tail part. Specifically, we denote the review text consisting of n words as {w1, w2, ..., wn}, the word embeddings e(wi) ∈RD, D is the word vector dimension. We take the concatenation of the word embeddings in a fixed length window size Z as the input of the linear layer, which is denoted as Ii ∈RD×Z. So the output of the linear layer Hi is calculated by Hk,i = Wk · Ii + bi, where Wk ∈RD×Z is the weight matrix of filter k. We utilize a max pooling layer to get the output of each filter. Then we take tanh as the activation function and concatenate the outputs as the final review embeddings, which is denoted as τi. 4.3 Jointly Information Encoding To model the correlation of the textual and behavioral information, we employ the jointly information encoding. By jointly learning from the global review graph, the textual and behavioral information of existing spammers and real reviewers are embedded into the word embeddings. Domain Hotel Restaurant fake 802 8368 non-fake 4876 50149 %fake 14.1% 14.3% #reviews 5678 58517 #reviewers 5124 35593 Table 3: Yelp Labeled Dataset Statistics. Dataset Train Test date range 2004.10.23 2012.01.01 2012.01.01 2012.09.26 #reviews 1132 422 (a) Hotel Dataset Train Test date range 2004.10.12 2012.01.01 2012.01.01 2012.10.02 #reviews 14012 2368 (b) Restaurant Table 4: The Balanced Datasets Statistics for Training and Testing the Classifier from Table 3. In addition, the rating usually represents the sentiment polarity of a review, e.g., five star means ‘like’ and one star means ‘dislike’. The spammers often review their target products with a low rating for discredited purpose, and with a high rating for promoted purpose. 
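Putting Equations (1)–(3) and the CNN encoder of Section 4.2 together, a hedged PyTorch sketch of the joint scoring might look as follows. The norm constraints are approximated here by explicit normalisation, the window-of-word-embeddings filters are written as an equivalent 1-d convolution, and all class and function names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReviewTransE(nn.Module):
    def __init__(self, n_products, n_reviewers, vocab_size, dim=100, window=2):
        super().__init__()
        self.product = nn.Embedding(n_products, dim)    # beta (head)
        self.reviewer = nn.Embedding(n_reviewers, dim)  # alpha (translation)
        self.word = nn.Embedding(vocab_size, dim)
        # A window of Z concatenated word embeddings fed to a linear filter is
        # equivalent to a 1-d convolution with kernel size Z; the number of
        # filters equals dim so tau lives in the same space as beta + alpha.
        self.conv = nn.Conv1d(dim, dim, kernel_size=window)

    def encode_review(self, tokens):
        x = self.word(tokens).transpose(1, 2)                 # (batch, dim, n_words)
        tau = torch.tanh(self.conv(x).max(dim=2).values)      # max pooling + tanh
        return F.normalize(tau, dim=-1)                       # enforce ||tau|| = 1

    def distance(self, products, reviewers, tau):
        beta = F.normalize(self.product(products), dim=-1)
        alpha = F.normalize(self.reviewer(reviewers), dim=-1)
        return ((beta + alpha - tau) ** 2).sum(dim=-1)        # squared Euclidean

def margin_loss(model, pos, neg):
    # pos/neg are (product_ids, reviewer_ids, review_tokens) triples; each
    # corrupted triple replaces either the product or the review text with a
    # randomly chosen one, as in Equation (2).
    d_pos = model.distance(pos[0], pos[1], model.encode_review(pos[2]))
    d_neg = model.distance(neg[0], neg[1], model.encode_review(neg[2]))
    return torch.clamp(1.0 + d_pos - d_neg, min=0).mean()     # Equation (1)
```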
To encode the semantics of the sentiment polarity into the review embeddings, we learn the embeddings of 1-5 stars rating in our model at the same time. They are taken as the constraints of the review embeddings during the joint learning. They are calculated as: C = ∑ (τ,γ)∈Γ ∑ (τ,γ′)∈Γ′ max{0, 1 + g(τ, γ) −g(τ, γ′)} (4) The set of corrupted tuples Γ′ is composed of training tuples Γ with the rating of review replaced by its opposite rating (i.e., 1 by 5, 2 by 4, 3 by 1 or 5). g(τ, γ) = ∥τ −γ∥2 2, norm constraints: ∥γ∥2 2 = 1. The final joint loss function is as follows: LJ = (1 −θ)L + θC (5) where θ is a hyper-parameter. 371 Features P R F1 A LF 54.5 71.1 61.7 55.9 1 LF+BF 63.4 52.6 57.5 61.1 2 BF EditSim+LF 55.3 69.7 61.6 56.6 3 BF W2Vsim+W2V 58.4 65.9 61.9 59.5 4 Ours RE 62.1 68.3 65.1 63.3 5 Ours RE+RRE+PRE 63.6 71.2 67.2 65.3 6 (a) Hotel P R F1 A 53.8 80.8 64.6 55.8 1 58.1 61.2 59.6 58.5 2 53.9 82.2 65.1 56.0 3 56.3 73.4 63.7 58.2 4 58.4 75.1 65.7 60.8 5 59.0 78.8 67.5 62.0 6 (b) Restaurant Table 5: SVM classification results across linguistic features (LF, bigrams here (Mukherjee et al., 2013b)), behavioral features (BF: RL, RD, MCS (Mukherjee et al., 2013b)); the SVM classification results by the intuitive method that finding the most similar existing review by edit distance ratio and take the found reviewers’ behavioral features as approximation (BF EditSim+LF), and results by the intuitive method that finding the most similar existing review by averaged pre-trained word embeddings (using Word2Vec) (BF W2Vsim+W2V); and the SVM classification results across the learnt review embeddings (RE), the learnt review’s rating embeddings (RRE), the learnt product’s average rating embeddings (PRE) by our model. Improvements of our model are statistically significant with p<0.005 based on paired t-test. 5 Experiments 5.1 Datasets and Evaluation Metrics Datasets: To evaluate the proposed method, we conducted experiments on Yelp dataset that was used in (Mukherjee et al., 2013b,c; Rayana and Akoglu, 2015). The statistics of the Yelp dataset are listed in Table 2 and Table 3. The reviewed product here refers to a hotel or restaurant. We take the existing reviews posted before January 1, 2012 as the datasets for training our embedding learning model, and take the first new reviews which just posted by the new reviewers after January 1, 2012 as the test datasets. Table 4 displays the statistics of the balanced datasets for training and testing the classifier. Evaluation Metrics: We select precision (P), recall (R), F1-Score (F1), accuracy (A) as metrics. 5.2 Our Model v.s. the Traditional Features To illustrate the effectiveness of our model, we conduct experiments on the public datasets, and make comparison with the most effective traditional linguistic features, e.g., bigrams, and the three practicable traditional behavioral features (RL, RD, MCS (Mukherjee et al., 2013b)) referred in Section 3.2. The results are shown in Table 5. For our model, we set the dimension of embeddings to 100, the number of CNN filters to 100, θ to 0.1, Z to 2. The hyper-parameters are tuned by grid search on the development dataset. The product and reviewer embeddings are randomly initialized from a uniform distribution (Socher et al., 2013). The word embeddings are initialized with 100-dimensions vectors pre-trained by the CBOW model (Word2Vec) (Mikolov et al., 2013). As Table 5 showed, our model observably performs better in detecting review spam for the cold-start task in both hotel and restaurant domains. 
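For completeness, a corresponding sketch of the rating constraint (Equation (4)) and the joint objective (Equation (5)) described in Section 4.3 is given below; it assumes the rating embeddings share the review embedding dimension and approximates the norm constraint by normalisation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class RatingConstraint(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.rating_emb = nn.Embedding(6, dim)   # indices 1..5 are used

    def opposite(self, r):
        # 1<->5, 2<->4, 3 -> 1 or 5 at random, following the corrupted set
        return {1: 5, 2: 4, 3: random.choice([1, 5]), 4: 2, 5: 1}[r]

    def forward(self, tau, ratings):
        # tau: (batch, dim) review embeddings; ratings: LongTensor of 1-5 stars
        gamma = F.normalize(self.rating_emb(ratings), dim=-1)          # ||gamma|| = 1
        neg_ids = torch.tensor([self.opposite(int(r)) for r in ratings],
                               device=ratings.device)
        gamma_neg = F.normalize(self.rating_emb(neg_ids), dim=-1)
        g_pos = ((tau - gamma) ** 2).sum(dim=-1)
        g_neg = ((tau - gamma_neg) ** 2).sum(dim=-1)
        return torch.clamp(1.0 + g_pos - g_neg, min=0).mean()          # Equation (4)

def joint_loss(transe_loss, rating_constraint, theta=0.1):
    return (1 - theta) * transe_loss + theta * rating_constraint       # Equation (5)
```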
Review Embeddings Compared with the traditional linguistic features, e.g., bigrams, using the review embeddings learnt by our model, results in around 3.4% improvement in F1 and around 7.4% improvement in A in the hotel domain (1.1% in F1 and 5.0% in A for the restaurant domain, shown in Tabel 5 (a,b) rows 1, 5). Compared with the combination of the bigrams and the traditional behavioral features, using the review embeddings learnt by our model, results in around 7.6% improvement in F1 and around 2.2% improvement in A in the hotel domain (6.1% in F1 and 2.3% in A for the restaurant domain, shown in Tabel 5 (a,b) rows 2, 5). The F1-Score (F1) of the classification under the balance distribution reflects the ability to detect the review spam. The accuracy (A) of the classification under the balance distribution reflects the ability to identify both the review spam and the real review. The experiment results indicate that our model performs significantly better than the traditional methods in F1 and A at the same time. The learnt review embeddings with encoded linguistic and behavioral information are more effective in detecting review 372 Features P R F1 A LF 54.5 71.1 61.7 55.9 1 Ours CNN 61.2 51.7 56.1 59.5 2 Ours RE 62.1 68.3 65.1 63.3 3 (a) Hotel P R F1 A 53.8 80.8 64.6 55.8 1 56.9 58.8 57.8 57.1 2 58.4 75.1 65.7 60.8 3 (b) Restaurant Table 6: SVM classification results across linguistic features (LF, bigrams here (Mukherjee et al., 2013b)), the learnt review embeddings (RE) ; and the classification results by only using our CNN. Both training and testing use balanced data (50:50). Improvements of our model are statistically significant with p<0.005 based on paired t-test. spam for the cold-start task. Rating Embeddings As we referred in Section 4.3, the rating of a review usually means the sentiment polarity of a real reviewer or the motivation of a spammer. As shown in Table 5 (a,b) rows 6, adding the rating embeddings of the products (hotel/restaurant) and reviews renders even higher F1 and A. We suppose that different rating embeddings are encoded with different semantic meanings. They reflect the semantic divergences between the average rating of the product and the review rating. In results, using RE+RRE+PRE which makes the best performance of our model, results in around 5.5% improvement in F1 and around 9.4% improvement in A in the hotel domain (2.9% in F1 and 6.2% in A for the restaurant domain, shown in Tabel 5 (a,b) rows 1, 6), compared with the LF. Using RE+RRE+PRE results in around 9.7% improvement in F1 and around 4.2% improvement in A in the hotel domain (7.9% in F1 and 3.5% in A for the restaurant domain, shown in Tabel 5 (a,b) rows 2, 6), compared with the LF+BF. The experiment results prove that our model is effective. The improvements in both the F1 and A prove that our model performs well in both detecting the review spam and identifying the real review. Furthermore, the improvements in both the hotel and restaurant domains prove that our model possesses preferable domain-adaptability 2. It can learn to represent the reviews with global linguistic and behavioral information from largescale unlabeled existing reviews. 2The improvements in hotel domain are greater than that in restaurant domain. The possible reason is the proportion of the available training data in hotel domain is higher than that in restaurant domain (99.01% vs. 97.40% in Table 2). 5.3 Our Jointly Embeddings v.s. 
the Intuitive Methods As mentioned in Section 1, to approximate the behavioral information of the new reviewers, there are other intuitive methods. So we conduct experiments with two intuitive methods as a comparison. One is finding the most similar existing review by edit distance ratio and taking the found reviewers’ behavioral features as an approximation, and then training the classifier on the behavioral features and bigrams (BF EditSim+LF). The other is finding the most similar existing review by cosine similarity of review embeddings which is the average of the pre-trained word embeddings (using Word2Vec), and then training the classifier on the behavioral features and review embeddings (BF W2Vsim+W2V). As shown in Table 5, our joint embeddings (Ours RE and Ours RE+RRE+PRE) obviously perform better than the intuitive methods, such as the Ours RE is 3.8% (Accuracy) and 3.2% (F1) better than BF W2Vsim+W2V in the hotel domain. The experiments indicate that our joint embeddings do a better job in capturing the reviewer’s characteristics and modeling the correlation of textual and behavioral information. 5.4 The Effectiveness of Encoding the Global Behavioral Information To further evaluate the effectiveness of encoding the global behavioral information in our model, we build an independent supervised convolutional neural network which has the same structure and parameter settings with the CNN part of our model. There is not any review graphic or behavioral information in this independent supervised CNN (Tabel 6 (a,b) row 2). As shown in Tabel 6 (a,b) rows 2, 3, compared with the review embeddings learnt by the independent supervised CNN, using 373 the review embeddings learnt by our model results in around 9.0% improvement in F1 and around 3.8% improvement in A in the hotel domain (7.9% in F1 and 3.7% in A for the restaurant domain). The results show that our model can represent the new reviews posted by the new reviewers with the correlated behavioral information encoded in the word embeddings. The transE part of our model has effectively recorded the behavioral information of the review graph. Thus, our model is more effective by jointly embedding the textual and behavioral informations, it helps to augment the possible behavioral information of the new reviewer. 5.5 The Effectiveness of CNN Compared with the the most effective linguistic features, e.g., bigrams, our independent supervised convolutional neural network performs better in A than F1 (shown in Tabel 5 (a,b) rows 1, 2). It indicates that the CNN do a better job in identifying the real review than the review spam. We suppose that the possible reason is that the CNN is good at modeling the different semantic aspects of a review. And the real reviewers usually tend to describe different aspects of a hotel or restaurant according to their real personal experiences, but the spammers can only forge fake reviews with their own infinite imagination. Mukherjee et al. (2013b) also proved that different psychological states of the minds of the spammers and non-spammers, lead to significant linguistic differences between review spam and non-spam. 6 Conclusion and Future Work This paper analyzes the importance and difficulty of the cold-start challenge in review spam combat. We propose a neural network model that jointly embeds the existing textual and behavioral information for detecting review spam in the coldstart task. 
It can learn to represent the new review of the new reviewer with the similar textual information and the correlated behavioral information in an unsupervised way. Then, a classifier is applied to detect the review spam. Experimental results prove the proposed model achieves an effective performance and possesses preferable domain-adaptability. It is also applicable to a large-scale dataset in an unsupervised way. To our best knowledge, this is the first work to handle the cold-start problem in review spam detection. We are going to explore more effective models in future. Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61533018) and the National Basic Research Program of China (No. 2014CB340503). And this research work was also supported by Google through focused research awards program. We would like to thank Prof. Bing Liu for useful advice, and the anonymous reviewers for their detailed comments and suggestions. References Leman Akoglu, Rishi Chandy, and Christos Faloutsos. 2013. Opinion fraud detection in online reviews by network effects. ICWSM 13:2–11. Michael Anderson and Jeremy Magruder. 2012. Learning from the crowd: Regression discontinuity estimates of the effects of an online review database*. The Economic Journal 122(563):957–989. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems. pages 2787–2795. Leticia Cagnina and Paolo Rosso. 2015. Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, Association for Computational Linguistics, chapter Classification of deceptive opinions using a low dimensionality representation, pages 58– 66. https://doi.org/10.18653/v1/W15-2909. Geli Fei, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013. Exploiting burstiness in reviews for review spammer detection. In ICWSM. Citeseer. Song Feng, Ritwik Banerjee, and Yejin Choi. 2012a. Syntactic stylometry for deception detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 171–175. http://aclweb.org/anthology/P12-2034. Song Feng, Longfei Xing, Anupam Gogar, and Yejin Choi. 2012b. Distributional footprints of deceptive product reviews. In ICWSM. Tommaso Fornaciari and Massimo Poesio. 2014. Identifying fake amazon reviews as learning from crowds. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 279–287. https://doi.org/10.3115/v1/E14-1030. 374 Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 318–327. https://doi.org/10.18653/v1/D15-1038. Zhen Hai, Peilin Zhao, Peng Cheng, Peng Yang, Xiao-Li Li, and Guangxia Li. 2016. Deceptive review spam detection via exploiting task relatedness and unlabeled data. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1817–1826. http://aclweb.org/anthology/D16-1187. C Harris. 2012. Detecting deceptive opinion spam using human computation. In Workshops at AAAI on Artificial Intelligence. 
Dirk Hovy. 2016. The enemy in your own camp: How well can we detect statisticallygenerated fake reviews–an adversarial study. In The 54th Annual Meeting of the Association for Computational Linguistics. page 351. https://www.aclweb.org/anthology/385. Nitin Jindal and Bing Liu. 2008. Opinion spam and analysis. In Proceedings of the First WSDM. ACM, pages 219–230. Nitin Jindal, Bing Liu, and Ee-Peng Lim. 2010. Finding unusual review patterns using unexpected rules. In Proceedings of the 19th CIKM. ACM, pages 1549–1552. Santosh KC and Arjun Mukherjee. 2016. On the temporal dynamics of opinion spamming: Case studies on yelp. In Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, pages 369–379. Seongsoon Kim, Hyeokyoon Chang, Seongwoon Lee, Minhwan Yu, and Jaewoo Kang. 2015. Deep semantic frame-based deceptive opinion spam analysis. In Proceedings of the 24th CIKM. ACM, pages 1131– 1140. Fangtao Li, Minlie Huang, Yi Yang, and Xiaoyan Zhu. 2011. Learning to identify review spam. In IJCAI Proceedings. volume 22, page 2488. Huayi Li, Zhiyuan Chen, Arjun Mukherjee, Bing Liu, and Jidong Shao. 2015. Analyzing and detecting opinion spam on a large-scale dataset via temporal and spatial patterns. In Ninth International AAAI Conference on Web and Social Media. Huayi Li, Bing Liu, Arjun Mukherjee, and Jidong Shao. 2014a. Spotting fake reviews using positive-unlabeled learning. Computaci´on y Sistemas 18(3):467–475. Jiwei Li, Claire Cardie, and Sujian Li. 2013. Topicspam: a topic-model based approach for spam detection. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 217–221. http://aclweb.org/anthology/P13-2039. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Jiwei Li, Myle Ott, Claire Cardie, and Eduard Hovy. 2014b. Towards a general rule for identifying deceptive opinion spam. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1566– 1576. https://doi.org/10.3115/v1/P14-1147. Ee-Peng Lim, Viet-An Nguyen, Nitin Jindal, Bing Liu, and Hady Wirawan Lauw. 2010. Detecting product review spammers using rating behaviors. In Proceedings of the 19th CIKM. ACM, pages 939–948. Bing Liu. 2015. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Cambridge University Press. Michael Luca. 2011. Reviews, reputation, and revenue: The case of yelp. com. Com (September 16, 2011). Harvard Business School NOM Unit Working Paper (12-016). Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. Arjun Mukherjee, Abhinav Kumar, Bing Liu, Junhui Wang, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013a. Spotting opinion spammers using behavioral footprints. In Proceedings of the 19th ACM SIGKDD. ACM, pages 632–640. Arjun Mukherjee and Bing Liu. 2010. Improving gender classification of blog authors. 
In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 207–217. http://aclweb.org/anthology/D10-1021. Arjun Mukherjee, Bing Liu, and Natalie Glance. 2012. Spotting fake reviewer groups in consumer reviews. In Proceedings of the 21st WWW. ACM, pages 191– 200. 375 Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie Glance. 2013b. Fake review detection: Classification and analysis of real and pseudo reviews. Technical report, Technical Report UIC-CS-201303, University of Illinois at Chicago. Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie S Glance. 2013c. What yelp fake review filter might be doing? In ICWSM. Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. Lying words: Predicting deception from linguistic styles. Personality and social psychology bulletin 29(5):665–675. Myle Ott, Yejin Choi, Claire Cardie, and T. Jeffrey Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 309–319. http://aclweb.org/anthology/P111032. JW Pennebaker, CK Chung, M Ireland, A Gonzales, and RJ Booth. 2007. The development and psychometric properties of liwc2007. austin, tx. Shebuti Rayana and Leman Akoglu. 2015. Collective opinion spam detection: Bridging review networks and metadata. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pages 985–994. Yafeng Ren and Yue Zhang. 2016. Deceptive opinion spam detection using neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 140–150. http://aclweb.org/anthology/C161014. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D. Christopher Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1631–1642. http://aclweb.org/anthology/D13-1170. Marco Viviani and Gabriella Pasi. 2017. Quantifier guided aggregation for the veracity assessment of online reviews. International Journal of Intelligent Systems 32(5):481–501. Guan Wang, Sihong Xie, Bing Liu, and Philip S Yu. 2011. Review graph based online store review spammer detection. In Proceedings of the 11th ICDM. IEEE, pages 1242–1247. Xuepeng Wang, Kang Liu, Shizhu He, and Jun Zhao. 2016. Learning to represent review with tensor decomposition for spam detection. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 866–875. http://aclweb.org/anthology/D16-1083. Sihong Xie, Guan Wang, Shuyang Lin, and Philip S Yu. 2012. Review spam detection via temporal pattern discovery. In Proceedings of the 18th KDD. ACM, pages 823–831. Qiongkai Xu and Hai Zhao. 2012. Using deep linguistic features for finding deceptive opinion spam. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, pages 1341– 1350. http://aclweb.org/anthology/C12-2131. 376
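To make the BF_W2Vsim+W2V baseline of Section 5.3 concrete, the following is a minimal sketch (not the authors' code) of the nearest-neighbour step it describes: a new reviewer's review is embedded as the average of its pre-trained word vectors, and the most similar existing review under cosine similarity is found so that its reviewer's behavioral features can be borrowed as an approximation. The word-vector source, function names, and data layout below are illustrative assumptions.

# Sketch of the cosine-similarity lookup used by the intuitive baseline
# (assumed names; word_vectors is any dict-like mapping token -> vector,
#  e.g. vectors loaded from a pre-trained Word2Vec model).
import numpy as np

def average_embedding(tokens, word_vectors, dim=300):
    """Average the pre-trained vectors of the in-vocabulary tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def most_similar_existing_review(new_tokens, existing_reviews, word_vectors):
    """Return the index of the existing review closest in cosine similarity."""
    query = average_embedding(new_tokens, word_vectors)
    best_idx, best_sim = -1, -np.inf
    for idx, tokens in enumerate(existing_reviews):
        cand = average_embedding(tokens, word_vectors)
        denom = np.linalg.norm(query) * np.linalg.norm(cand)
        sim = float(query @ cand) / denom if denom > 0 else 0.0
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    return best_idx, best_sim

# Hypothetical usage: the behavioral features of the author of
# existing_reviews[best_idx] would then be concatenated with the review
# representation and fed to the SVM classifier.

As reported in Section 5.3, this kind of heuristic approximation underperforms the jointly learned embeddings, which is the paper's main argument for encoding behavioral information directly into the review representation.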
2017
34
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 377–387 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1035 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 377–387 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1035 Learning Cognitive Features from Gaze Data for Sentiment and Sarcasm Classification using Convolutional Neural Network Abhijit Mishra†, Kuntal Dey†, Pushpak Bhattacharyya⋆ †IBM Research, India ⋆Indian Institute of Technology Bombay, India †{abhijimi, kuntadey}@in.ibm.com ⋆[email protected] Abstract Cognitive NLP systems- i.e., NLP systems that make use of behavioral data - augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain-imaging etc.. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement / gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features. 1 Introduction Detection of sentiment and sarcasm in usergenerated short reviews is of primary importance for social media analysis, recommendation and dialog systems. Traditional sentiment analyzers and sarcasm detectors face challenges that arise at lexical, syntactic, semantic and pragmatic levels (Liu and Zhang, 2012; Mishra et al., 2016c). Featurebased systems (Akkaya et al., 2009; Sharma and Bhattacharyya, 2013; Poria et al., 2014) can aptly handle lexical and syntactic challenges (e.g. learning that the word deadly conveys a strong positive sentiment in opinions such as Shane Warne is a deadly bowler, as opposed to The high altitude Himalayan roads have deadly turns). It is, however, extremely difficult to tackle subtleties at semantic and pragmatic levels. For example, the sentence I really love my job. I work 40 hours a week to be this poor. requires an NLP system to be able to understand that the opinion holder has not expressed a positive sentiment towards her / his job. In the absence of explicit clues in the text, it is difficult for automatic systems to arrive at a correct classification decision, as they often lack external knowledge about various aspects of the text being classified. Mishra et al. (2016b) and Mishra et al. 
(2016c) show that NLP systems based on cognitive data (or simply, Cognitive NLP systems) , that leverage eye-movement information obtained from human readers, can tackle the semantic and pragmatic challenges better. The hypothesis here is that human gaze activities are related to the cognitive processes in the brain that combine the “external knowledge” that the reader possesses with textual clues that she / he perceives. While incorporating behavioral information obtained from gaze-data in NLP systems is intriguing and quite plausible, especially due to the availability of low cost eye-tracking machinery (Wood and Bulling, 2014; Yamamoto et al., 2013), few methods exist for text classification, and they rely on handcrafted features extracted from gaze data (Mishra et al., 2016b,c). These systems have limited capabilities due to two reasons: (a) Manually designed gaze based features may not adequately 377 capture all forms of textual subtleties (b) Eyemovement data is not as intuitive to analyze as text which makes the task of designing manual features more difficult. So, in this work, instead of handcrafting the gaze based and textual features, we try to learn feature representations from both gaze and textual data using Convolutional Neural Network (CNN). We test our technique on two publicly available datasets enriched with eyemovement information, used for binary classification tasks of sentiment polarity and sarcasm detection. Our experiments show that the automatically extracted features often help to achieve significant classification performance improvement over (a) existing systems that rely on handcrafted gaze and textual features and (b) CNN based systems that rely on text input alone. The datasets used in our experiments, resources and other relevant pointers are available at http://www.cfilt.iitb.ac.in/ cognitive-nlp The rest of the paper is organized as follows. Section 2 discusses the motivation behind using readers’ eye-movement data in a text classification setting. In Section 3, we argue why CNN is preferred over other available alternatives for feature extraction. The CNN architecture is proposed and discussed in Section 4. Section 5 describes our experimental setup and results are discussed in Section 6. We provide a detailed analysis of the results along with some insightful observations in Section 7. Section 8 points to relevant literature followed by Section 9 that concludes the paper. Terminology A fixation is a relatively long stay of gaze on a visual object (such as words in text) where as a sacccade corresponds to quick shifting of gaze between two positions of rest. Forward and backward saccades are called progressions and regressions respectively. A scanpath is a line graph that contains fixations as nodes and saccades as edges. 2 Eye-movement and Linguistic Subtleties Presence of linguistic subtleties often induces (a) surprisal (Kutas and Hillyard, 1980; Malsburg et al., 2015), due to the underlying disparity /context incongruity or (b) higher cognitive load (Rayner and Duffy, 1986), due to the presence of lexically and syntactically complex structures. While surprisal accounts for irregular saccades (Malsburg et al., 2015), higher cognitive Word ID Time ( miliseconds) P1 P2 P3 S2: The lead actress is terrible and I cannot be convinced she is supposed to be some forensic genius. S1: I'll always cherish the original misconception I had of you.. Figure 1: Scanpaths of three participants for two sentences (Mishra et al., 2016b). 
Sentence S1 is sarcastic but S2 is not. Length of the straight lines represents saccade distance and size of the circles represents fixation duration load results in longer fixation duration (Kliegl et al., 2004). Mishra et al. (2016b) find that presence of sarcasm in text triggers either irregular saccadic patterns or unusually high duration fixations than non-sarcastic texts (illustrated through example scanpath representations in Figure 1). For sentiment bearing texts, highly subtle eyemovement patterns are observed for semantically/pragmatically complex negative opinions (expressing irony, sarcasm, thwarted expectations, etc.) than the simple ones (Mishra et al., 2016b). The association between linguistic subtleties and eye-movement patterns could be captured through sophisticated feature engineering that considers both gaze and text inputs. In our work, CNN takes the onus of feature engineering. 3 Why Convolutional Neural Network? CNNs have been quite effective in learning filters for image processing tasks, filters being used to transform the input image into more informative feature space (Krizhevsky et al., 2012). Filters learned at various CNN layers are quite similar to handcrafted filters used for detection of edges, contours, and removal of redundant backgrounds. We believe, a similar technique can also be applied to eye-movement data, where the learned filters will, hopefully, extract informative cognitive features. For instance, for sarcasm, we expect the network to learn filters that detect long distance saccades (refer to Figure 2 for an analogical il378 Figure 2: Illustrative analogy between CNN applied to images and scanpath representations showing why CNN can be useful for learning features from gaze patterns. Images partially taken from Taigman et al. (2014) lustration). With more number of convolution filters of different dimensions, the network may extract multiple features related to different gaze attributes (such as fixations, progressions, regressions and skips) and will be free from any form of human bias that manually extracted features are susceptible to. 4 Learning Feature Representations: The CNN Architecture Figure 3 shows the CNN architecture with two components for processing and extracting features from text and gaze inputs. The components are explained below. 4.1 Text Component The text component is quite similar to the one proposed by Kim (2014) for sentence classification. Words (in the form of one-hot representation) in the input text are first replaced by their embeddings of dimension K (ith word in the sentence represented by an embedding vector xi ∈RK). As per Kim (2014), a multi-channel variant of CNN (referred to as MULTICHANNELTEXT) can be implemented by using two channels of embeddingsone that remains static throughout training (referred to as STATICTEXT), and the other one that gets updated during training (referred to as NONSTATICTEXT). We separately experiment with static, non-static and multi-channel variants. For each possible input channel of the text component, a given text is transformed into a tensor of fixed length N (padded with zero-tensors wherever necessary to tackle length variations) by concatenating the word embeddings. x1:N = x1 ⊕x2 ⊕x3 ⊕... ⊕xN (1) where ⊕is the concatenation operator. To extract local features1, convolution operation is applied. Convolution operation involves a filter, W ∈RHK, which is convolved with a window of H embeddings to produce a local feature for the H words. 
A local feature, ci is generated from a window of embeddings xi:i+H−1 by applying a non linear function (such as a hyperbolic tangent) over the convoluted output. Mathematically, ci = f(W.xi:i+H−1 + b) (2) where b ∈R is the bias and f is the non-linear function. This operation is applied to each possible window of H words to produce a feature map (c) for the window size H. c = [c1, c2, c3, ..., cN−H+1] (3) A global feature is then obtained by applying max pooling operation2 (Collobert et al., 2011) over the feature map. The idea behind max-pooling is to capture the most important feature - one with the highest value - for each feature map. We have described the process by which one feature is extracted from one filter (red bordered portions in Figure 3 illustrate the case of H = 2). The model uses multiple filters for each filter size to obtain multiple features representing the text. In the MULTICHANNELTEXT variant, for a window of H words, the convolution operation is separately applied on both the embedding channels. Local features learned from both the channels are concatenated before applying max-pooling. 4.2 Gaze Component The gaze component deals with scanpaths of multiple participants annotating the same text. Scanpaths can be pre-processed to extract two sequences3 of gaze data to form separate channels of input: (1) A sequence of normalized4 durations of fixations (in milliseconds) in the order in which 1features specific to a region in case of images or window of words in case of text 2mean pooling does not perform well. 3like text-input, gaze sequences are padded where necessary 4scaled across participants using min-max normalization to reduce subjectivity 379 Text Component Non-static Static Saccade Fixation Gaze Component P1 P2 P3 P4 P5 P6 P7 P8 N×K representation of sentences with static and non static channels P×G representation of sentences with fixation and saccade channels 1-D convolution operation with multiple filter width and feature maps 2-D convolution operation with multiple filter row and Column widths Max-pooling for each filter width Max-pooling over multiple dimensions for multiple filter widths Fully connected with dropouts and softmax output Merged pooled values Figure 3: Deep convolutional model for feature extraction from both text and gaze inputs they appear in the scanpath, and (2) A sequence of position of fixations (in terms of word id) in the order in which they appear in the scanpath. These channels are related to two fundamental gaze attributes such as fixation and saccade respectively. With two channels, we thus have three possible configurations of the gaze component such as (i) FIXATION, where the input is normalized fixation duration sequence, (ii) SACCADE, where the input is fixation position sequence, and (iii) MULTICHANNELGAZE, where both the inputs channels are considered. For each possible input channel, the input is in the form of a P × G matrix (with P →number of participants and G →length of the input sequence). Each element of the matrix gij ∈R, with i ∈P and j ∈G, corresponds to the jth gaze attribute (either fixation duration or word id, depending on the channel) of the input sequence of the ith participant. Now, unlike the text component, here we apply convolution operation across two dimensions i.e. choosing a two dimensional convolution filter W ∈RJK (for simplicity, we have kept J = K, thus , making the dimension of W, J2). 
For the dimension size of J2, a local feature cij is computed from the window of gaze elements gij:(i+J−1)(j+J−1) by, cij = f(W.gij:(i+J−1)(j+J−1) + b) (4) where b ∈R is the bias and f is a non-linear function. This operation is applied to each possible window of size J2 to produce a feature map (c), c =[c11, c12, c13, ..., c1(G−J+1), c21, c22, c23, ..., c2(G−J+1), ..., c(P −J+1)1, c(P −J+1)2, ..., c(P −J+1)(G−J+1)] (5) A global feature is then obtained by applying max pooling operation. Unlike the text component, max-pooling operator is applied to a 2D window of local features size M × N (for simplicity, we set M = N, denoted henceforth as M2). For the window of size M2, the pooling operation on c will result in as set of global features ˆcJ = max{cij:(i+M−1)(j+M−1)} for each possible i, j. We have described the process by which one feature is extracted from one filter (of 2D window size J2 and the max-pooling window size of M2). In Figure 3, red and blue bordered portions illustrate the cases of J2 = [3, 3] and M2 = [2, 2] respectively. Like the text component, the gaze component also uses multiple filters for each filter size to obtain multiple features representing the gaze input. In the MULTICHANNELGAZE variant, for a 2D window of J2, the convolution operation is separately applied on both fixation duration and saccade channels and local features learned from both the channels are concatenated before maxpooling is applied. Once the global features are learned from both the text and gaze components, they are merged 380 and passed to a fully connected feed forward layer (with number of units set to 150) followed by a SoftMax layer that outputs the the probabilistic distribution over the class labels. The gaze component of our network is not invariant of the order in which the scanpath data is given as input- i.e., the P rows in the P × G can not be shuffled, even if each row is independent from others. The only way we can think of for addressing this issue is by applying convolution operations to all P × G matrices formed with all the permutations of P, capturing every possible ordering. Unfortunately, this makes the training process significantly less scalable, as the number of model parameters to be learned becomes huge. As of now, training and testing are carried out by keeping the order of the input constant. 5 Experiment Setup We now share several details regarding our experiments below. 5.1 Dataset We conduct experiments for two binaryclassification tasks of sentiment and sarcasm using two publicly available datasets enriched with eye-movement information. Dataset 1 has been released by Mishra et al. (2016a). It contains 994 text snippets with 383 positive and 611 negative examples. Out of the 994 snippets, 350 are sarcastic. Dataset 2 has been used by Joshi et al. (2014) and it consists of 843 snippets comprising movie reviews and normalized tweets out of which 443 are positive, and 400 are negative. Eye-movement data of 7 and 5 readers is available for each snippet for dataset 1 and 2 respectively. 5.2 CNN Variants With text component alone we have three variants such as STATICTEXT, NONSTATICTEXT and MULTICHANNELTEXT (refer to Section 4.1). Similarly, with gaze component we have variants such as FIXATION, SACCADE and MULTICHANNELGAZE (refer to Section 4.2). With both text and gaze components, 9 more variants could thus beexperimented with. 5.3 Hyper-parameters For text component, we experiment with filter widths (H) of [3, 4]. 
For the gaze component, 2D filters (J2) set to [3 × 3], [4 × 4] respectively. The max pooling 2D window, M2, is set to [2 × 2]. In both gaze and text components, number of filters is set to 150, resulting in 150 feature maps for each window. These model hyper-parameters are fixed by trial and error and are possibly good enough to provide a first level insight into our system. Tuning of hyper-parameters might help in improving the performance of our framework, which is on our future research agenda. 5.4 Regularization For regularization dropout is employed both on the embedding and the penultimate layers with a constraint on l2-norms of the weight vectors (Hinton et al., 2012). Dropout prevents co-adaptation of hidden units by randomly dropping out - i.e., setting to zero - a proportion p of the hidden units during forward propagation. We set p to 0.25. 5.5 Training We use ADADELTA optimizer (Zeiler, 2012), with a learning rate of 0.1. The input batch size is set to 32 and number of training iterations (epochs) is set to 200. 10% of the training data is used for validation. 5.6 Use of Pre-trained Embeddings: Initializing the embedding layer with of pretrained embeddings can be more effective than random initialization (Kim, 2014). In our experiments, we have used embeddings learned using the movie reviews with one sentence per review dataset (Pang and Lee, 2005). It is worth noting that, for a small dataset like ours, using a small data-set like the one from (Pang and Lee, 2005) helps in reducing the number model parameters resulting in faster learning of embeddings. The results are also quite close to the ones obtained using word2vec facilitated by Mikolov et al. (2013). 5.7 Comparison with Existing Work For sentiment analysis, we compare our systems’s accuracy (for both datasets 1 and 2) with Mishra et al. (2016c)’s systems that rely on handcrafted text and gaze features. For sarcasm detection, we compare Mishra et al. (2016b)’s sarcasm classifier with ours using dataset 1 (with available gold standard labels for sarcasm). We follow the same 10-fold train-test configuration as these existing works for consistency. 381 Dataset1 Dataset2 Configuration P R F P R F Traditional systems based on N¨aive Bayes 63.0 59.4 61.14 50.7 50.1 50.39 Multi-layered Perceptron 69.0 69.2 69.2 66.8 66.8 66.8 textual features SVM (Linear Kernel) 72.8 73.2 72.6 70.3 70.3 70.3 Systems by Mishra et al. 
(2016c) Gaze based (Best) 61.8 58.4 60.05 53.6 54.0 53.3 Text + Gaze (Best) 73.3 73.6 73.5 71.9 71.8 71.8 CNN with only text input (Kim, 2014) STATICTEXT 63.85 61.26 62.22 55.46 55.02 55.24 NONSTATICTEXT 72.78 71.93 72.35 60.51 59.79 60.14 MULTICHANNELTEXT 72.17 70.91 71.53 60.51 59.66 60.08 CNN with only gaze Input FIXATION 60.79 58.34 59.54 53.95 50.29 52.06 SACCADE 64.19 60.56 62.32 51.6 50.65 51.12 MULTICHANNELGAZE 65.2 60.35 62.68 52.52 51.49 52 CNN with both text and gaze Input STATICTEXT + FIXATION 61.52 60.86 61.19 54.61 54.32 54.46 STATICTEXT + SACCADE 65.99 63.49 64.71 58.39 56.09 57.21 STATICTEXT + MULTICHANNELGAZE 65.79 62.89 64.31 58.19 55.39 56.75 NONSTATICTEXT + FIXATION 73.01 70.81 71.9 61.45 59.78 60.60 NONSTATICTEXT + SACCADE 77.56 73.34 75.4 65.13 61.08 63.04 NONSTATICTEXT + MULTICHANNELGAZE 79.89 74.86 77.3 63.93 60.13 62 MULTICHANNELTEXT + FIXATION 74.44 72.31 73.36 60.72 58.47 59.57 MULTICHANNELTEXT + SACCADE 78.75 73.94 76.26 63.7 60.47 62.04 MULTICHANNELTEXT + MULTICHANNELGAZE 78.38 74.23 76.24 64.29 61.08 62.64 Table 1: Results for different traditional feature based systems and CNN model variants for the task of sentiment analysis. Abbreviations (P,R,F)→Precision, Recall, F-score. SVM→Support Vector Machine 6 Results In this section, we discuss the results for different model variants for sentiment polarity and sarcasm detection tasks. 6.1 Results for Sentiment Analysis Task Table 1 presents results for sentiment analysis task. For dataset 1, different variants of our CNN architecture outperform the best systems reported by Mishra et al. (2016c), with a maximum F-score improvement of 3.8%. This improvement is statistically significant of p < 0.05 as confirmed by McNemar test. Moreover, we observe an F-score improvement of around 5% for CNNs with both gaze and text components as compared to CNNs with only text components (similar to the system by Kim (2014)), which is also statistically significant (with p < 0.05). For dataset 2, CNN based approaches do not perform better than manual feature based approaches. However, variants with both text and gaze components outperform the ones with only text component (Kim, 2014), with a maximum Fscore improvement of 2.9%. We observe that for dataset 2, training accuracy reaches 100 within 25 epochs with validation accuracy stable around 50%, indicating the possibility of overfitting. Tuning the regularization parameters specific to dataset 2 may help here. Even though CNN might not be proving to be a choice as good as handcrafted features for dataset 2, the bottom line remains that incorporation of gaze data into CNN consistently improves the performance over onlytext-based CNN variants. 6.2 Results for Sarcasm Detection Task For sarcasm detection, our CNN model variants outperform traditional systems by a maximum margin of 11.27% (Table 2). However, the improvement by adding the gaze component to the CNN network is just 1.34%, which is statistically insignificant over CNN with text component. While inspecting the sarcasm dataset, we observe a clear difference between the vocabulary of sarcasm and non-sarcasm classes in our dataset. This, perhaps, was captured well by the text component, especially the variant with only non-static embeddings. 7 Discussion In this section, some important observations from our experiments are discussed. 7.1 Effect of Embedding Dimension Variation Embedding dimension has proven to have a deep impact on the performance of neural systems (dos Santos and Gatti, 2014; Collobert et al., 2011). 
382 Configuration P R F Traditional systems based on N¨aive Bayes 69.1 60.1 60.5 Multi-layered Perceptron 69.7 70.4 69.9 textual features SVM (Linear Kernel) 72.1 71.9 72 Systems by Riloff et al. (2013) Text based (Ordered) 49 46 47 Text + Gaze (Unordered) 46 41 42 System by Joshi et al. (2015) Text based (best) 70.7 69.8 64.2 Systems by Mishra et al. (2016b) Gaze based (Best) 73 73.8 73.1 Text based (Best) 72.1 71.9 72 Text + Gaze (Best) 76.5 75.3 75.7 CNN with only text input (Kim, 2014) STATICTEXT 67.17 66.38 66.77 NONSTATICTEXT 84.19 87.03 85.59 MULTICHANNELTEXT 84.28 87.03 85.63 CNN with only gaze input FIXATION 74.39 69.62 71.93 SACCADE 68.58 68.23 68.40 MULTICHANNELGAZE 67.93 67.72 67.82 CNN with both text and gaze Input STATICTEXT + FIXATION 72.38 71.93 72.15 STATICTEXT + SACCADE 73.12 72.14 72.63 STATICTEXT + MULTICHANNELGAZE 71.41 71.03 71.22 NONSTATICTEXT + FIXATION 87.42 85.2 86.30 NONSTATICTEXT + SACCADE 84.84 82.68 83.75 NONSTATICTEXT + MULTICHANNELGAZE 84.98 82.79 83.87 MULTICHANNELTEXT + FIXATION 87.03 86.92 86.97 MULTICHANNELTEXT + SACCADE 81.98 81.08 81.53 MULTICHANNELTEXT + MULTICHANNELGAZE 83.11 81.69 82.39 Table 2: Results for different traditional feature based systems and CNN model variants for the task of sarcasm detection on dataset 1. Abbreviations (P,R,F)→Precision, Recall, F-score We repeated our experiments by varying the embedding dimensions in the range of [50-300]5 and observed that reducing embedding dimension improves the F-scores by a little margin. Small embedding dimensions are probably reducing the chances of over-fitting when the data size is small. We also observe that for different embedding dimensions, performance of CNN with both gaze and text components is consistently better than that with only text component. 7.2 Effect of Static / Non-static Text Channels Non-static embedding channel has a major role in tuning embeddings for sentiment analysis by bringing adjectives expressing similar sentiment close to each other (e.g, good and nice), where as static channel seems to prevent over-tuning of embeddings (over-tuning often brings verbs like love closer to the pronoun I in embedding space, purely due to higher co-occurrence of these two words in sarcastic examples). 7.3 Effect of Fixation / Saccade Channels For sentiment detection, saccade channel seems to be handing text having semantic incongruity (due 5a standard range (Liu et al., 2015; Melamud et al., 2016) to the presence of irony / sarcasm) better. Fixation channel does not help much, may be because of higher variance in fixation duration. For sarcasm detection, fixation and saccade channels perform with similar accuracy when employed separately. Accuracy reduces with gaze multichannel, may be because of higher variation of both fixations and saccades across sarcastic and nonsarcastic classes, as opposed to sentiment classes. 7.4 Effectiveness of the CNN-learned Features To examine how good the features learned by the CNN are, we analyzed the features for a few example cases. Figure 4 presents some of the example test cases for the task of sarcasm detection. Example 1 contains sarcasm while examples 2, 3 and 4 are non-sarcastic. To see if there is any difference in the automatically learned features between text-only and combined text and gaze variants, we examine the feature vector (of dimension 150) for the examples obtained from different model variants. Output of the hidden layer after merge layer is considered as features learned by the network. 
We plot the features, in the form of color-bars, following Li et al. (2016) - denser col383 1. I would like to live in Manchester, England. The transition between Manchester and death would be unnoticeable. (Sarcastic, Negative Sentiment) 2. We really did not like this camp. After a disappointing summer, we switched to another camp, and all of us much happier on all fronts! (Non Sarcastic, Negative Sentiment) 3. Helped me a lot with my panics attack I take 6 mg a day for almost 20 years can't stop of course but make me feel very comfortable (Non Sarcastic, Positive Sentiment) 4. Howard is the King and always will be, all others are weak clones. (Non Sarcastic, Positive Sentiment) (a) MultichannelText + MultichannelGaze (b) MultichannelText Figure 4: Visualization of representations learned by two variants of the network for sarcasm detection task. The output of the Merge layer (of dimension 150) are plotted in the form of colour-bars. Plots with thick red borders correspond to wrongly predicted examples. ors representing feature with higher magnitude. In Figure 4, we show only two representative model variants viz., MULTICHANNELTEXT and MULTICHANNELTEXT+ MULTICHANNELGAZE. As one can see, addition of gaze information helps to generate features with more subtle differences (marked by blue rectangular boxes) for sarcastic and non-sarcastic texts. It is also interesting to note that in the marked region, features for the sarcastic texts exhibit more intensity than the nonsarcastic ones - perhaps capturing the notion that sarcasm typically conveys an intensified negative opinion. This difference is not clear in feature vectors learned by text-only systems for instances like example 2, which has been incorrectly classified by MULTICHANNELTEXT. Example 4 is incorrectly classified by both the systems, perhaps due to lack of context. In cases like this, addition of gaze information does not help much in learning more distinctive features, as it becomes difficult for even humans to classify such texts. 8 Related Work Sentiment and sarcasm classification are two important problems in NLP and have been the focus of research for many communities for quite some time. Popular sentiment and sarcasm detection systems are feature based and are based on unigrams, bigrams etc. (Dave et al., 2003; Ng et al., 2006), syntactic properties (Martineau and Finin, 2009; Nakagawa et al., 2010), semantic properties (Balamurali et al., 2011). For sarcasm detection, supervised approaches rely on (a) Unigrams and Pragmatic features (Gonz´alez-Ib´anez et al., 2011; Barbieri et al., 2014; Joshi et al., 2015) (b) Stylistic patterns (Davidov et al., 2010) and patterns related to situational disparity (Riloff et al., 2013) and (c) Hastag interpretations (Liebrecht et al., 2013; Maynard and Greenwood, 2014). Recent systems are based on variants of deep neural network built on the top of embeddings. A few representative works in this direction for sentiment analysis are based on CNNs (dos Santos and Gatti, 2014; Kim, 2014; Tang et al., 2014), RNNs (Dong et al., 2014; Liu et al., 2015) and combined archi384 tecture (Wang et al., 2016). Few works exist on using deep neural networks for sarcasm detection, one of which is by (Ghosh and Veale, 2016) that uses a combination of RNNs and CNNs. Eye-tracking technology is a relatively new NLP, with very few systems directly making use of gaze data in prediction frameworks. Klerke et al. 
(2016) present a novel multi-task learning approach for sentence compression using labeled data, while, Barrett and Søgaard (2015) discriminate between grammatical functions using gaze features. The closest works to ours are by Mishra et al. (2016b) and Mishra et al. (2016c) that introduce feature engineering based on both gaze and text data for sentiment and sarcasm detection tasks. These recent advancements motivate us to explore the cognitive NLP paradigm. 9 Conclusion and Future Directions In this work, we proposed a multimodal ensemble of features, automatically learned using variants of CNNs from text and readers’ eye-movement data, for the tasks of sentiment and sarcasm classification. On multiple published datasets for which gaze information is available, our systems could often achieve significant performance improvements over (a) systems that rely on handcrafted gaze and textual features and (b) CNN based systems that rely on text input alone. An analysis of the learned features confirms that the combination of automatically learned features is indeed capable of representing deep linguistic subtleties in text that pose challenges to sentiment and sarcasm classifiers. Our future agenda includes: (a) optimizing the CNN framework hyper-parameters (e.g., filter width, dropout, embedding dimensions, etc.) to obtain better results, (b) exploring the applicability of our technique for documentlevel sentiment analysis and (c) applying our framework to related problems, such as emotion analysis, text summarization, and questionanswering, where considering textual clues alone may not prove to be sufficient. Acknowledgments We thank Anoop Kunchukuttan, Joe Cheri Ross, and Sachin Pawar, research scholars of the Center for Indian Language Technology (CFILT), IIT Bombay for their valuable inputs. References Cem Akkaya, Janyce Wiebe, and Rada Mihalcea. 2009. Subjectivity word sense disambiguation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1Volume 1. ACL, pages 190–199. AR Balamurali, Aditya Joshi, and Pushpak Bhattacharyya. 2011. Harnessing wordnet senses for supervised sentiment classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. pages 1081–1091. Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. ACL 2014 page 50. Maria Barrett and Anders Søgaard. 2015. Using reading behavior to predict grammatical functions. In Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning. Association for Computational Linguistics, Lisbon, Portugal, pages 1–5. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Kushal Dave, Steve Lawrence, and David M Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of the 12th international conference on World Wide Web. ACM, pages 519–528. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 107–116. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. 
Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL (2). pages 49–54. C´ıcero Nogueira dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING. Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network. In Proceedings of NAACL-HLT. pages 161–169. Roberto Gonz´alez-Ib´anez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Association for Computational Linguistics, pages 581–586. 385 Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580 . Aditya Joshi, Abhijit Mishra, Nivvedan Senthamilselvan, and Pushpak Bhattacharyya. 2014. Measuring sentiment annotation complexity of text. In ACL (Daniel Marcu 22 June 2014 to 27 June 2014). ACL. Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. Harnessing context incongruity for sarcasm detection. Proceedings of 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China page 757. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1746– 1751. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. arXiv preprint arXiv:1604.03357 . Reinhold Kliegl, Ellen Grabner, Martin Rolfs, and Ralf Engbert. 2004. Length, frequency, and predictability effects of words on eye movements in reading. European Journal of Cognitive Psychology 16(12):262–284. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. pages 1097–1105. Marta Kutas and Steven A Hillyard. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science 207(4427):203–205. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proceedings of NAACL-HLT. pages 681– 691. Christine Liebrecht, Florian Kunneman, and Antal van den Bosch. 2013. The perfect solution for detecting sarcasm in tweets# not. WASSA 2013 page 29. Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining text data, Springer, pages 415–463. Pengfei Liu, Shafiq R Joty, and Helen M Meng. 2015. Fine-grained opinion mining with recurrent neural networks and word embeddings. In EMNLP. pages 1433–1443. Titus Malsburg, Reinhold Kliegl, and Shravan Vasishth. 2015. Determinants of scanpath regularity in reading. Cognitive science 39(7):1675–1703. Justin Martineau and Tim Finin. 2009. Delta tfidf: An improved feature space for sentiment analysis. ICWSM 9:106. Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In Proceedings of LREC. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In NAACL HLT 2016. pages 1030–1040. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. 
Linguistic regularities in continuous space word representations. In HLT-NAACL. volume 13, pages 746–751. Abhijit Mishra, Diptesh Kanojia, and Pushpak Bhattacharyya. 2016a. Predicting readers’ sarcasm understandability by modeling gaze behavior. In Proceedings of AAAI. Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016b. Harnessing cognitive features for sarcasm detection. ACL 2016 page 156. Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016c. Leveraging cognitive features for sentiment analysis. CoNLL 2016 page 156. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In NAACLHLT. Association for Computational Linguistics, pages 786–794. Vincent Ng, Sajib Dasgupta, and SM Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In Proceedings of the COLING/ACL on Main conference poster sessions. Association for Computational Linguistics, pages 611–618. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 115–124. Soujanya Poria, Erik Cambria, Gregoire Winterstein, and Guang-Bin Huang. 2014. Sentic patterns: Dependency-based rules for concept-level sentiment analysis. Knowledge-Based Systems 69:45–63. Keith Rayner and Susan A Duffy. 1986. Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory & Cognition 14(3):191–201. 386 Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of Empirical Methods in Natural Language Processing. pages 704–714. Raksha Sharma and Pushpak Bhattacharyya. 2013. Detecting domain dedicated polar words. In Proceedings of the International Joint Conference on Natural Language Processing. Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. 2014. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 1701–1708. Duyu Tang, Furu Wei, Bing Qin, Ting Liu, and Ming Zhou. 2014. Coooolll: A deep learning system for twitter sentiment classification. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). pages 208–212. Jin Wang, Liang-Chih Yu, K. Robert Lai, and Xuejie Zhang. 2016. Dimensional sentiment analysis using a regional cnn-lstm model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 225–230. Erroll Wood and Andreas Bulling. 2014. Eyetab: Model-based gaze estimation on unmodified tablet computers. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, pages 207–210. Michiya Yamamoto, Hironobu Nakagawa, Koichi Egawa, and Takashi Nagamatsu. 2013. Development of a mobile tablet pc with gaze-tracking function. In Human Interface and the Management of Information. Information and Interaction for Health, Safety, Mobility and Complex Environments, Springer, pages 421–429. Matthew D Zeiler. 2012. 
Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . 387
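To make the two-component architecture described in Section 4 concrete, the following is a minimal sketch, assuming PyTorch (the paper does not name its framework): a 1-D convolution over the N x K word-embedding matrix for the text component and a 2-D convolution over the P x G gaze matrix for the gaze component, each max-pooled, merged, and passed through a 150-unit layer with dropout and a softmax output. Filter widths and the 2x2 pooling window follow Section 5.3; the single gaze channel, activation placement, and all names are illustrative simplifications rather than the authors' implementation.

# Minimal sketch of the gaze + text CNN of Section 4 (assumed framework and names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeTextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, n_filters=150,
                 text_widths=(3, 4), gaze_kernel=3, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Text component: one 1-D convolution per filter width H (Eqs. 2-3).
        self.text_convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, h) for h in text_widths])
        # Gaze component: 2-D convolution over the P x G matrix (Eqs. 4-5),
        # using a single channel (e.g. fixation durations) for simplicity.
        self.gaze_conv = nn.Conv2d(1, n_filters, gaze_kernel)
        self.fc = nn.Linear(n_filters * len(text_widths) + n_filters, 150)
        self.drop = nn.Dropout(0.25)
        self.out = nn.Linear(150, n_classes)

    def forward(self, word_ids, gaze):        # word_ids: (B, N), gaze: (B, P, G)
        x = self.embed(word_ids).transpose(1, 2)            # (B, K, N)
        text_feats = [torch.tanh(conv(x)).max(dim=2).values  # global max-pool
                      for conv in self.text_convs]
        g = torch.tanh(self.gaze_conv(gaze.unsqueeze(1)))    # (B, F, P', G')
        g = F.max_pool2d(g, 2).amax(dim=(2, 3))              # pooled gaze features
        merged = torch.cat(text_feats + [g], dim=1)          # merge both components
        h = self.drop(torch.tanh(self.fc(merged)))
        return F.log_softmax(self.out(h), dim=1)

In this sketch the multi-channel text and gaze variants would simply duplicate the corresponding convolution branch and concatenate its local features before pooling, as the paper describes.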
2017
35
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 388–397 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1036 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 388–397 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1036 An Unsupervised Neural Attention Model for Aspect Extraction Ruidan He†‡, Wee Sun Lee†, Hwee Tou Ng†, and Daniel Dahlmeier‡ †Department of Computer Science, National University of Singapore ‡SAP Innovation Center Singapore †{ruidanhe,leews,nght}@comp.nus.edu.sg ‡[email protected] Abstract Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks. 1 Introduction Aspect extraction is one of the key tasks in sentiment analysis. It aims to extract entity aspects on which opinions have been expressed (Hu and Liu, 2004; Liu, 2012). For example, in the sentence “The beef was tender and melted in my mouth”, the aspect term is “beef”. Two sub-tasks are performed in aspect extraction: (1) extracting all aspect terms (e.g., “beef”) from a review corpus, (2) clustering aspect terms with similar meaning into categories where each category represents a single aspect (e.g., cluster “beef”, “pork”, “pasta”, and “tomato” into one aspect food). Previous works for aspect extraction can be categorized into three approaches: rule-based, supervised, and unsupervised. Rule-based methods usually do not group extracted aspect terms into categories. Supervised learning requires data annotation and suffers from domain adaptation problems. Unsupervised methods are adopted to avoid reliance on labeled data needed for supervised learning. In recent years, Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its variants (Titov and McDonald, 2008; Brody and Elhadad, 2010; Zhao et al., 2010; Mukherjee and Liu, 2012) have become the dominant unsupervised approach for aspect extraction. LDA models the corpus as a mixture of topics (aspects), and topics as distributions over word types. While the mixture of aspects discovered by LDA-based models may describe a corpus fairly well, we find that the individual aspects inferred are of poor quality – aspects often consist of unrelated or loosely-related concepts. This may substantially reduce users’ confidence in using such automated systems. There could be two primary reasons for the poor quality. 
Conventional LDA models do not directly encode word co-occurrence statistics which are the primary source of information to preserve topic coherence (Mimno et al., 2011). They implicitly capture such patterns by modeling word generation from the document level, assuming that each word is generated independently. Furthermore, LDA-based models need to estimate a distribution of topics for each document. Review documents tend to be short, thus making the estimation of topic distributions more difficult. In this work, we present a novel neural approach to tackle the weaknesses of LDA-based methods. We start with neural word embeddings that al388 ready map words that usually co-occur within the same context to nearby points in the embedding space (Mikolov et al., 2013). We then filter the word embeddings within a sentence using an attention mechanism (Bahdanau et al., 2015) and use the filtered words to construct aspect embeddings. The training process for aspect embeddings is analogous to autoencoders, where we use dimension reduction to extract the common factors among embedded sentences and reconstruct each sentence through a linear combination of aspect embeddings. The attention mechanism deemphasizes words that are not part of any aspect, allowing the model to focus on aspect words. We call our proposed model Attention-based Aspect Extraction (ABAE). In contrast to LDA-based models, our proposed method explicitly encodes word-occurrence statistics into word embeddings, uses dimension reduction to extract the most important aspects in the review corpus, and uses an attention mechanism to remove irrelevant words to further improve coherence of the aspects. We have conducted extensive experiments on large review data sets. The results show that ABAE is effective in discovering meaningful and coherent aspects. It substantially outperforms baseline methods on multiple evaluation tasks. In addition, ABAE is intuitive and structurally simple. It can also easily scale to a large amount of training data. Therefore, it is a promising alternative to LDA-based methods proposed previously. 2 Related Work The problem of aspect extraction has been well studied in the past decade. Initially, methods were mainly based on manually defined rules. Hu and Liu (2004) proposed to extract different product features through finding frequent nouns and noun phrases. They also extracted opinion terms by finding the synonyms and antonyms of opinion seed words through WordNet. Following this, a number of methods have been proposed based on frequent item mining and dependency information to extract product aspects (Zhuang et al., 2006; Somasundaran and Wiebe, 2009; Qiu et al., 2011). These models heavily depend on predefined rules which work well only when the aspect terms are restricted to a small group of nouns. Supervised learning approaches generally model aspect extraction as a standard sequence labeling problem. Jin and Ho (2009) and Li et al. (2010) proposed to use hidden Markov models (HMM) and conditional random fields (CRF), respectively with a set of manually-extracted features. More recently, different neural models (Yin et al., 2016; Wang et al., 2016) were proposed to automatically learn features for CRF-based aspect extraction. Rule-based models are usually not refined enough to categorize the extracted aspect terms. On the other hand, supervised learning requires large amounts of labeled data for training purposes. 
Unsupervised approaches, especially topic models, have been proposed subsequently to avoid reliance on labeled data. Generally, the outputs of those models are word distributions or rankings for each aspect. Aspects are naturally obtained without separately performing extraction and categorization. Most existing works (Brody and Elhadad, 2010; Zhao et al., 2010; Mukherjee and Liu, 2012; Chen et al., 2014) are based on variants and extensions of LDA (Blei et al., 2003). Recently, Wang et al. (2015) proposed a restricted Boltzmann machine (RBM)-based model to simultaneously extract aspects and relevant sentiments of a given review sentence, treating aspects and sentiments as separate hidden variables in RBM. However, the RBM-based model proposed in (Wang et al., 2015) relies on a substantial amount of prior knowledge such as part-of-speech (POS) tagging and sentiment lexicons. A biterm topic model (BTM) that generates co-occurring word pairs was proposed in (Yan et al., 2013). We experimentally compare ABAE and BTM on multiple tasks in this paper. Attention models (Mnih et al., 2014) have recently gained popularity in training neural networks and have been applied to various natural language processing tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), sentence summarization (Rush et al., 2015), sentiment classification (Chen et al., 2016; Tang et al., 2016), and question answering (Hermann et al., 2015). Rather than using all available information, attention mechanism aims to focus on the most pertinent information for a task. Unlike previous works, in this paper, we apply attention to an unsupervised neural model. Our experimental results demonstrate its effectiveness under an unsupervised setting for aspect extraction. 389 3 Model Description We describe the Attention-based Aspect Extraction (ABAE) model in this section. The ultimate goal is to learn a set of aspect embeddings, where each aspect can be interpreted by looking at the nearest words (representative words) in the embedding space. We begin by associating each word w in our vocabulary with a feature vector ew ∈Rd. We use word embeddings for the feature vectors as word embeddings are designed to map words that often co-occur in a context to points that are close by in the embedding space (Mikolov et al., 2013). The feature vectors associated with the words correspond to the rows of a word embedding matrix E ∈RV ×d, where V is the vocabulary size. We want to learn embeddings of aspects, where aspects share the same embedding space with words. This requires an aspect embedding matrix T ∈RK×d, where K, the number of aspects defined, is much smaller than V . The aspect embeddings are used to approximate the aspect words in the vocabulary, where the aspect words are filtered through an attention mechanism. Each input sample to ABAE is a list of indexes for words in a review sentence. Given such an input, two steps are performed as shown in Figure 1. First, we filter away non-aspect words by down-weighting them using an attention mechanism, and construct a sentence embedding zs from weighted word embeddings. Then, we try to reconstruct the sentence embedding as a linear combination of aspect embeddings from T. This process of dimension reduction and reconstruction, where ABAE aims to transform sentence embeddings of the filtered sentences (zs) into their reconstructions (rs) with the least possible amount of distortion, preserves most of the information of the aspect words in the K embedded aspects. 
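As a point of reference for the description that follows, here is a minimal NumPy sketch of the quantities just introduced. It is purely illustrative (the sizes, random initialisation, and toy vocabulary are made up by us), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, K = 10000, 200, 14                     # vocabulary size, embedding size, number of aspects

E = rng.normal(scale=0.1, size=(V, d))       # word embedding matrix, one row e_w per word
T = rng.normal(scale=0.1, size=(K, d))       # aspect embedding matrix, shares the space of E

# An input sample is the list of word indexes of one review sentence.
vocab = {"the": 0, "beef": 1, "was": 2, "tender": 3}
sentence = [vocab[w] for w in ["the", "beef", "was", "tender"]]
word_vectors = E[sentence]                   # (n, d) word embeddings, to be filtered by attention
```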
We next describe the process in detail. 3.1 Sentence Embedding with Attention Mechanism We construct a vector representation zs for each input sentence s in the first step. In general, we want the vector representation to capture the most relevant information with regards to the aspect (topic) of the sentence. We define the sentence embedding zs as the weighted summation of word embeddings ewi, i = 1, ..., n corresponding to the Figure 1: An example of the ABAE structure. word indexes in the sentence. zs = n X i=1 aiewi. (1) For each word wi in the sentence, we compute a positive weight ai which can be interpreted as the probability that wi is the right word to focus on in order to capture the main topic of the sentence. The weight ai is computed by an attention model, which is conditioned on the embedding of the word ewi as well as the global context of the sentence: ai = exp(di) Pn j=1 exp(dj) (2) di = e⊤ wi · M · ys (3) ys = 1 n n X i=1 ewi (4) where ys is simply the average of the word embeddings, which we believe captures the global context of the sentence. M ∈Rd×d is a matrix mapping between the global context embedding ys and the word embedding ew and is learned as part of the training process. We can think of the attention mechanism as a two-step process. Given a sentence, we first construct its representation by averaging all the word representations. Then the weight of a word is assigned by considering two things. First, we filter the word through the transformation M which is able to capture the relevance of the word to the K aspects. Then we capture the relevance of the filtered word to the sentence by taking the inner product of the filtered word to the global context ys. 390 3.2 Sentence Reconstruction with Aspect Embeddings We have obtained the sentence embedding. Now we describe how to compute the reconstruction of the sentence embedding. As shown in Figure 1, the reconstruction process consists of two steps of transitions, which is similar to an autoencoder. Intuitively, we can think of the reconstruction as a linear combination of aspect embeddings from T: rs = T⊤· pt (5) where rs is the reconstructed vector representation, pt is the weight vector over K aspect embeddings, where each weight represents the probability that the input sentence belongs to the related aspect. pt can simply be obtained by reducing zs from d dimensions to K dimensions and then applying a softmax non-linearity that yields normalized non-negative weights: pt = softmax(W · zs + b) (6) where W, the weighted matrix parameter, and b, the bias vector, are learned as part of the training process. 3.3 Training Objective ABAE is trained to minimize the reconstruction error. We adopted the contrastive max-margin objective function used in previous work (Weston et al., 2011; Socher et al., 2014; Iyyer et al., 2016). For each input sentence, we randomly sample m sentences from our training data as negative samples. We represent each negative sample as ni which is computed by averaging its word embeddings. Our objective is to make the reconstructed embedding rs similar to the target sentence embedding zs while different from those negative samples. Therefore, the unregularized objective J is formulated as a hinge loss that maximize the inner product between rs and zs and simultaneously minimize the inner product between rs and the negative samples: J(θ) = X s∈D m X i=1 max(0, 1 −rszs + rsni) (7) where D represents the training data set and θ = {E, T, M, W, b} represents the model parameters. 
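To make Equations 1–7 concrete, the following is a minimal self-contained NumPy sketch of one forward pass and the per-sentence hinge loss. It is our own illustration under assumed names (`softmax`, `forward`, `hinge_loss` are ours), not the released implementation; the random parameter initialisations, toy word indexes, and toy negative samples are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, K = 10000, 200, 14
E = rng.normal(scale=0.1, size=(V, d))       # word embeddings (pre-trained and kept fixed)
T = rng.normal(scale=0.1, size=(K, d))       # aspect embeddings
M = rng.normal(scale=0.1, size=(d, d))       # attention transformation matrix (Eq. 3)
W = rng.normal(scale=0.1, size=(K, d))       # dimension-reduction matrix (Eq. 6)
b = np.zeros(K)                              # bias vector (Eq. 6)

def softmax(x):
    x = x - x.max()                          # numerically stable softmax
    e = np.exp(x)
    return e / e.sum()

def forward(word_idx):
    """word_idx: list of word indexes of one review sentence."""
    e_w = E[word_idx]                        # (n, d) word embeddings
    y_s = e_w.mean(axis=0)                   # global context of the sentence, Eq. 4
    d_i = e_w @ M @ y_s                      # relevance scores, Eq. 3
    a = softmax(d_i)                         # attention weights, Eq. 2
    z_s = a @ e_w                            # attention-weighted sentence embedding, Eq. 1
    p_t = softmax(W @ z_s + b)               # aspect weights, Eq. 6
    r_s = T.T @ p_t                          # reconstruction from aspect embeddings, Eq. 5
    return z_s, r_s, a, p_t

def hinge_loss(z_s, r_s, negatives):
    """Contrastive max-margin term of Eq. 7 for one sentence.
    negatives: (m, d) averaged word embeddings of m randomly sampled sentences."""
    return np.maximum(0.0, 1.0 - r_s @ z_s + negatives @ r_s).sum()

sentence = [12, 57, 3, 981]                           # toy word indexes
z_s, r_s, a, p_t = forward(sentence)
negatives = rng.normal(scale=0.1, size=(20, d))       # m = 20 toy negative samples
print(hinge_loss(z_s, r_s, negatives))
```

In practice all parameters except E are updated with a gradient-based optimizer; the sketch only spells out the tensor shapes and the order of operations.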
Domain #Reviews #Labeled sentences Restaurant 52,574 3,400 Beer 1,586,259 9,245 Table 1: Dataset description. 3.4 Regularization Term We hope to learn vector representations of the most representative aspects for a review dataset. However, the aspect embedding matrix T may suffer from redundancy problems during training. To ensure the diversity of the resulting aspect embeddings, we add a regularization term to the objective function J to encourage the uniqueness of each aspect embedding: U(θ) = ∥Tn · T⊤ n −I∥ (8) where I is the identity matrix, and Tn is T with each row normalized to have length 1. Any nondiagonal element tij(i ̸= j) in the matrix Tn · T⊤ n corresponds to the dot product of two different aspect embeddings. U reaches its minimum value when the dot product between any two different aspect embeddings is zero. Thus the regularization term encourages orthogonality among the rows of the aspect embedding matrix T and penalizes redundancy between different aspect vectors. Our final objective function L is obtained by adding J and U: L(θ) = J(θ) + λU(θ) (9) where λ is a hyperparameter that controls the weight of the regularization term. 4 Experimental Setup 4.1 Datasets We evaluate our method on two real-word datasets. The detailed statistics of the datasets are summarized in Table 1. (1) Citysearch corpus: This is a restaurant review corpus widely used by previous works (Ganu et al., 2009; Brody and Elhadad, 2010; Zhao et al., 2010), which contains over 50,000 restaurant reviews from Citysearch New York. Ganu et al. (2009) also provided a subset of 3,400 sentences from the corpus with manually labeled aspects. These annotated sentences are used for evaluation of aspect identification. There are six manually defined aspect labels: Food, Staff, Ambience, Price, Anecdotes, and Miscellaneous. 391 (2) BeerAdvocate: This is a beer review corpus introduced in (McAuley et al., 2012), containing over 1.5 million reviews. A subset of 1,000 reviews, corresponding to 9,245 sentences, are annotated with five aspect labels: Feel, Look, Smell, Taste, and Overall. 4.2 Baseline Methods To validate the performance of ABAE, we compare it against a number of baselines: (1) LocLDA (Brody and Elhadad, 2010): This method uses a standard implementation of LDA. In order to prevent the inference of global topics and direct the model towards rateable aspects, each sentence is treated as a separate document. (2) k-means: We initialize the aspect matrix T by using the k-means centroids of the word embeddings. To show the power of ABAE, we compare its performance with using the kmeans centroids directly. (3) SAS (Mukherjee and Liu, 2012): This is a hybrid topic model that jointly discovers both aspects and aspect-specific opinions. This model has been shown to be competitive among topic models in discovering meaningful aspects (Mukherjee and Liu, 2012; Wang et al., 2015). (4) BTM (Yan et al., 2013): This is a biterm topic model that is specially designed for short texts such as texts from social media and review sites. The major advantage of BTM over conventional LDA models is that it alleviates the problem of data sparsity in short documents by directly modeling the generation of unordered word-pair co-occurrences (biterms) over the corpus. It has been shown to perform better than conventional LDA models in discovering coherent topics. 4.3 Experimental Settings Review corpora are preprocessed by removing punctuation symbols, stop words, and words appearing less than 10 times. 
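Returning to the training objective of Section 3.4 for a moment: the orthogonality penalty of Equation 8 and the combined loss of Equation 9 amount to a few lines of NumPy. The sketch below is illustrative only; the `hinge` argument stands in for the J(θ) term of Eq. 7.

```python
import numpy as np

def orthogonality_penalty(T):
    """U(theta) = || Tn Tn^T - I ||, Eq. 8, with Tn the row-normalised aspect matrix."""
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    G = Tn @ Tn.T                                    # (K, K) pairwise dot products of aspect vectors
    return np.linalg.norm(G - np.eye(T.shape[0]))    # Frobenius norm; diagonal of G - I is zero

def total_loss(hinge, T, lam=1.0):
    """L(theta) = J(theta) + lambda * U(theta), Eq. 9."""
    return hinge + lam * orthogonality_penalty(T)
```

With λ = 1, the penalty simply adds the Frobenius norm of TnTn⊤ − I to the reconstruction loss, pushing different aspect vectors toward orthogonality.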
For LocLDA, we use the open-source implementation GibbsLDA++1 and for BTM, we use the implementation released by (Yan et al., 2013)2. We tune the hyperparameters of all topic model baselines on a held-out set 1http://gibbslda.sourceforge.net 2http://code.google.com/p/btm/ with grid search using the topic coherence metric to be introduced later in Eq 10: for LocLDA, the Dirichlet priors α = 0.05 and β = 0.1; for SAS and BTM, α = 50/K and β = 0.1. We run 1,000 iterations of Gibbs sampling for all topic models. For the ABAE model, we initialize the word embedding matrix E with word vectors trained by word2vec with negative sampling on each dataset, setting the embedding size to 200, window size to 10, and negative sample size to 5. The parameters we use for training word embeddings are standard with no specific tuning to our data. We also initialize the aspect embedding matrix T with the centroids of clusters resulting from running k-means on word embeddings. Other parameters are initialized randomly. During the training process, we fix the word embedding matrix E and optimize other parameters using Adam (Kingma and Ba, 2014) with learning rate 0.001 for 15 epochs and batch size of 50. We set the number of negative samples per input sample m to 20, and the orthogonality penalty weight λ to 1 by tuning the hyperparameters on a held-out set with grid search. The results reported for all models are the average over 10 runs. Following (Brody and Elhadad, 2010; Zhao et al., 2010), we set the number of aspects for the restaurant corpus to 14. We experimented with different number of aspects from 10 to 20 for the beer corpus. The results showed no major difference, so we also set it to 14. As in previous work (Brody and Elhadad, 2010; Zhao et al., 2010), we manually mapped each inferred aspect to one of the gold-standard aspects according to its top ranked representative words. In ABAE, representative words of an aspect can be found by looking at its nearest words in the embedding space using cosine as the similarity metric. 5 Evaluation and Results We describe the evaluation tasks and report the experimental results in this section. We evaluate ABAE on two criteria: • Is it able to find meaningful and semantically coherent aspects? • Is it able to improve aspect identification performance on real-world review datasets? 5.1 Aspect Quality Evaluation Table 2 presents all 14 aspects inferred by ABAE for the restaurant domain. Compared to gold392 Inferred Aspects Representative Words Gold Aspects Main Dishes beef, duck, pork, mahi, filet, veal Food Dessert gelato, banana, caramel, cheesecake, pudding, vanilla Drink bottle, selection, cocktail, beverage, pinot, sangria Ingredient cucumber, scallion, smothered, stewed, chilli, cheddar General cooking, homestyle, traditional, cuisine, authentic, freshness Physical Ambience wall, lighting, ceiling, wood, lounge, floor Ambience Adjectives intimate, comfy, spacious, modern, relaxing, chic Staff waitstaff, server, staff, waitress, bartender, waiter Staff Service unprofessional, response, condescending, aggressive, behavior, rudeness Price charge, paid, bill, reservation, came, dollar Price Anecdotes celebrate, anniversary, wife, fiance, recently, wedding Anecdotes Location park, street, village, avenue, manhattan, brooklyn Misc. 
General excellent, great, enjoyed, best, wonderful, fantastic Other aged, reward, white, maison, mediocrity, principle Table 2: List of inferred aspects for restaurant reviews (left), with top representative words for each inferred aspect (middle), and the corresponding gold-standard aspect labels (right). Inferred aspect labels (left) were assigned manually. Figure 2: Average coherence score versus number of top n terms for the restaurant domain (top) and beer domain (bottom). standard labels, the inferred aspects are more finegrained. For example, it can distinguish main dishes from desserts, and drinks from food. 5.1.1 Coherence Score In order to objectively measure the quality of aspects, we use coherence score as a metric which has been shown to correlate well with human judgment (Mimno et al., 2011). Given an aspect z and a set of top N words of z, Sz = {wz 1, ..., wz N}, the coherence score is calculated as follows: C(z; Sz) = N X n=2 n−1 X l=1 logD2(wz n, wz l ) + 1 D1(wz l ) (10) where D1(w) is the document frequency of word w and D2(w1, w2) is the co-document frequency of words w1 and w2. A higher coherence score indicates a better aspect interpretability, i.e., more meaningful and semantically coherent. Figure 2 shows the average coherence score of each model which is computed as 1 K PK k=1 C(zk; Szk) on both the restaurant domain and beer domain. From the results, we make the following observations: (1) ABAE outperforms previous models for all ranked buckets. (2) BTM performs slightly better than LocLDA and SAS. This may be because BTM directly models the generation of biterms, while conventional LDA just implicitly captures such patterns by modeling word generation from the document level. (3) It is interesting to note that performing k-means on the word embeddings is sufficient to perform better than all topic model baselines, including BTM. This indicates that neural word embedding is a better model for capturing co-occurrence than LDA, even for BTM which specifically models the generation of co-occurring word pairs. k-means LocLDA SAS BTM ABAE Restaurant 11 8 9 9 11 Beer 9 8 8 9 10 Table 3: Number of coherent aspects. K (number of aspects) = 14 for all models. 5.1.2 User Evaluation As we want to discover a set of aspects that the human user finds agreeable, it is also necessary 393 Figure 3: Average p@n over all coherent aspects for the restaurant domain (left) and beer domain (right). to carry out user evaluation directly. Following the experimental setting in (Chen et al., 2014), we recruited three human judges. Each aspect is labeled as coherent if the majority of judges assess that most of its top 50 terms coherently represent a product aspect. The numbers of coherent aspects discovered by each model are shown in Table 3. ABAE discovers the most number of coherent aspects compared with other models. For a coherent aspect, each of its top terms is labeled as correct if and only if the majority of judges assess that it reflects the related aspect. We adopt precision@n (or p@n) to evaluate the results, which was also used in (Mukherjee and Liu, 2012; Chen et al., 2014). Figure 3 shows the average p@n results over all coherent aspects for each domain. We can see that the user evaluation results correlate well with the coherence scores shown in Figure 2, where ABAE substantially outperforms all other models for all ranked buckets, especially for large values of n. 
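The coherence score of Equation 10 needs only document frequencies and co-document frequencies, so it can be computed directly from the corpus. Below is a hedged Python sketch (our own illustration, not the evaluation script used in the paper) that first picks the top-N representative words of an aspect by cosine similarity, as described in Section 4.3, and then scores them with Eq. 10.

```python
import numpy as np

def top_words(aspect_vec, E, id2word, N=10):
    """Top-N representative words of an aspect: nearest rows of E by cosine similarity."""
    sims = E @ aspect_vec / (np.linalg.norm(E, axis=1) * np.linalg.norm(aspect_vec) + 1e-12)
    return [id2word[i] for i in np.argsort(-sims)[:N]]

def coherence(top, docs):
    """C(z; S_z) of Eq. 10; docs is a list of sets of word types, one set per document.
    Assumes every word in `top` appears in at least one document (so D1 > 0)."""
    d1 = {w: sum(w in doc for doc in docs) for w in top}     # document frequencies D1
    score = 0.0
    for n in range(1, len(top)):             # w_n ranges over the 2nd..Nth top word
        for l in range(n):                   # w_l ranges over the earlier top words
            d2 = sum((top[n] in doc) and (top[l] in doc) for doc in docs)   # co-document frequency D2
            score += np.log((d2 + 1) / d1[top[l]])
    return score

# The average coherence of a model is the mean of C(z_k; S_{z_k}) over its K aspects.
```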
5.2 Aspect Identification We evaluate the performance of sentence-level aspect identification on both domains using the annotated sentences shown in Table 1. The evaluation criterion is to judge how well the predictions match the true labels, measured by precision, recall, and F1 scores. The results4 are shown in Table 4 and Table 5. Given a review sentence, ABAE first assigns an inferred aspect label which corresponds to the highest weight in pt calculated as shown in Equation 6 . And we then assign the gold-standard label to the sentence according to the mapping between inferred aspects and gold-standard labels. 3k-means assigns a sentence an inferred aspect whose embedding is the closest to the averaged word embeddings of the sentence. 4Note that the values of P/R/F1 reported are the average over 10 runs (except some values taken from published results in Table 4). Thus the F1 values cannot be computed directly from corresponding P/R values Aspect Method Precision Recall F1 LocLDA 0.898 0.648 0.753 ME-LDA 0.874 0.787 0.828 SAS 0.867 0.772 0.817 Food BTM 0.933 0.745 0.816 SERBM 0.891 0.854 0.872 k-means3 0.931 0.647 0.755 ABAE 0.953 0.741 0.828 LocLDA 0.804 0.585 0.677 ME-LDA 0.779 0.540 0.638 SAS 0.774 0.556 0.647 Staff BTM 0.828 0.579 0.677 SERBM 0.819 0.582 0.680 k-means 0.789 0.685 0.659 ABAE 0.802 0.728 0.757 LocLDA 0.603 0.677 0.638 ME-LDA 0.773 0.558 0.648 SAS 0.780 0.542 0.640 Ambience BTM 0.813 0.599 0.685 SERBM 0.805 0.592 0.682 k-means 0.730 0.637 0.677 ABAE 0.815 0.698 0.740 Table 4: Aspect identification results on the restaurant domain. The results of LocLDA and MELDA are taken from (Zhao et al., 2010); the results of SAS and SERBM are taken from (Wang et al., 2015). For the restaurant domain, we follow the experimental settings of previous work (Brody and Elhadad, 2010; Zhao et al., 2010; Wang et al., 2015) to make our results comparable. To do that, (1) we only used the single-label sentences for evaluation to avoid ambiguity (about 83% of labeled sentences have a single label), and (2) we only evaluated on three major aspects, namely Food, Staff, and Ambience. The other aspects do not show clear patterns in either word usage or writing style, which makes these aspects very hard for even humans to identify. Besides the baseline models, we also compare the results with other published models, including MaxEnt-LDA (ME-LDA) (Zhao et al., 2010) and SERBM (Wang et al., 2015). SERBM has reported state-of-the-art results for aspect identification on the restaurant corpus to date. However, SERBM relies on a substantial amount of prior knowledge. 394 Aspect Method Precision Recall F1 Feel k-means 0.720 0.815 0.737 LocLDA 0.938 0.537 0.675 SAS 0.783 0.695 0.730 BTM 0.892 0.687 0.772 ABAE 0.815 0.824 0.816 Taste k-means 0.533 0.413 0.456 LocLDA 0.399 0.655 0.487 SAS 0.543 0.496 0.505 BTM 0.616 0.467 0.527 ABAE 0.637 0.358 0.456 Smell k-means 0.844 0.295 0.422 LocLDA 0.560 0.488 0.489 SAS 0.336 0.673 0.404 BTM 0.541 0.549 0.527 ABAE 0.483 0.744 0.575 Taste+Smell k-means 0.697 0.828 0.740 LocLDA 0.651 0.873 0.735 SAS 0.804 0.759 0.769 BTM 0.885 0.760 0.815 ABAE 0.897 0.853 0.866 Look k-means 0.915 0.696 0.765 LocLDA 0.963 0.676 0.774 SAS 0.958 0.705 0.806 BTM 0.953 0.854 0.872 ABAE 0.969 0.882 0.905 Overall k-means 0.693 0.648 0.639 LocLDA 0.558 0.690 0.603 SAS 0.618 0.664 0.619 BTM 0.699 0.715 0.700 ABAE 0.654 0.828 0.725 Table 5: Aspect identification results on the beer domain. 
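To make the evaluation protocol concrete, here is a small Python sketch (illustrative only; the aspect-to-gold mapping dictionary and the label names are placeholders) of how a sentence receives an inferred aspect via the largest weight in pt and how precision, recall, and F1 are then computed for one gold aspect on single-label sentences.

```python
import numpy as np

# Hypothetical manual mapping from inferred aspect index to gold-standard label.
aspect2gold = {0: "Food", 1: "Food", 2: "Staff", 3: "Ambience"}   # ...one entry per inferred aspect

def predict_gold_label(p_t):
    """Pick the inferred aspect with the highest weight in p_t (Eq. 6), then map it to a gold label."""
    return aspect2gold[int(np.argmax(p_t))]

def prf(pred_labels, true_labels, target):
    """Precision, recall, and F1 for one gold aspect (e.g., 'Food')."""
    tp = sum(p == target and t == target for p, t in zip(pred_labels, true_labels))
    fp = sum(p == target and t != target for p, t in zip(pred_labels, true_labels))
    fn = sum(p != target and t == target for p, t in zip(pred_labels, true_labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```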
We make the following observations from Table 4: (1) ABAE outperforms all other models on F1 score for aspects Staff and Ambience. (2) The F1 score of ABAE for Food is worse than SERBM while its precision is very high. We analyzed the errors and found that most of the sentences we failed to recognize as Food are general descriptions without specific food words appearing. For example, the true label for the sentence “The food is prepared quickly and efficiently.” is Food. ABAE assigns Staff to it as the highly focused words according to the attention mechanism are quickly and efficiently which are more related to Staff. In fact, although this sentence contains the word food, we think it is a rather general description of service. (3) ABAE substantially outperforms k-means for this task although both methods perform well for extracting coherent aspects as shown in Figure 2 and Figure 3. This shows the power brought by the attention mechanism, which is able to capture the main topic of a sentence by only focusing on aspect-related words. For the beer domain, in addition to the five goldstandard aspect labels, we also combined Taste and Smell to form a single aspect – Taste+Smell. This is because these two aspects are very similar Figure 4: Visualization of the attention layer. and many words can be used to describe both aspects. For example, the words spicy, bitter, fresh, sweet, etc. are top ranked representative words in both aspects, which makes it very hard even for humans to distinguish them. Since Taste and Smell are highly correlated and difficult to separate in real life, a natural way to evaluate is to treat them as a single aspect. We can see from Table 5 that due to the issue described above, all models perform poorly on Taste and Smell. ABAE outperforms previous models in F1 scores on all aspects except for Taste. The results demonstrate the capability of ABAE in identifying separable aspects. Aspect Method Precision Recall F1 Food ABAE− 0.898 0.739 0.791 ABAE 0.953 0.741 0.828 Staff ABAE− 0.784 0.669 0.693 ABAE 0.802 0.728 0.757 Ambience ABAE− 0.782 0.660 0.703 ABAE 0.815 0.698 0.740 Table 6: Comparison between ABAE and ABAE− on aspect identification on the restaurant domain. 5.3 Validating the Effectiveness of Attention Model Figure 4 shows the weights of words assigned by the attention model for some example sentences. As we can see, the weights learned by the model correspond very strongly with human intuition. In order to evaluate how attention model affects the overall performance of ABAE, we conduct experiments to compare ABAE and ABAE−on aspect identification, where ABAE−denotes the model in which the attention layer is switched off and sentence embedding is calculated by averaging its word embeddings: zs = 1 n Pn i=1 ewi. The results on the restaurant domain are shown in Table 6. ABAE achieves substantially higher precision and recall on all aspects compared with 395 ABAE−, which demonstrates the effectiveness of the attention mechanism. 6 Conclusion We have presented ABAE, a simple yet effective neural attention model for aspect extraction. In contrast to LDA models, ABAE explicitly captures word co-occurrence patterns and overcomes the problem of data sparsity present in review corpora. Our experimental results demonstrated that ABAE not only learns substantially higher quality aspects, but also more effectively captures the aspects of reviews than previous methods. 
To the best of our knowledge, we are the first to propose an unsupervised neural approach for aspect extraction. ABAE is intuitive and structurally simple, and also scales up well. All these benefits make it a promising alternative to LDA-based methods in practice. Acknowledgements This research is partially funded by the Economic Development Board and the National Research Foundation of Singapore. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3:993–1022. Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016. Neural sentiment classification with user and product attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Zhiyuan Chen, Arjun Mukherjee, and Bing Liu. 2014. Aspect extraction with automated prior knowledge learning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Gayatree Ganu, Noemie Elhadad, and Am´elie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In Proceedings of the 12th International Workshop on the Web and Databases. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationship. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Wei Jin and Hung Hay Ho. 2009. A novel lexicalized HMM-based learning framework for web opinion mining. In Proceedings of the 26th International Conference on Machine Learning. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the 2nd International Conference on Learning Representations. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Ying-Ju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Proceedings of the 23rd International Conference on Computational Linguistics. Bing Liu. 2012. Sentiment Analysis and Opinion Mining. Morgan & Claypool publishers. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multiaspect reviews. In Proceedings of the 12th IEEE International Conference on Data Mining. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 396 Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. 2014. Recurrent models of visual attention. In Advances in Neural Information Processing Systems. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics 37:9–27. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics 2. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the 17th International World Wide Web Conference. Linlin Wang, Kang Liu, Zhu Cao, Jun Zhao, and Gerard de Melo. 2015. Sentiment-aspect extraction based on restricted Boltzmann machines. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Wenya Wang, Sinno J. Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence. Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd International World Wide Web Conference. Yichun Yin, Furu Wei, Li Dong, Kaiming Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a MaxEnt-LDA hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management. 397
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 398–408 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1037 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 398–408 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1037 Other Topics You May Also Agree or Disagree: Modeling Inter-Topic Preferences using Tweets and Matrix Factorization Akira Sasaki, Kazuaki Hanawa, Naoaki Okazaki, and Kentaro Inui Graduate School of Information Sciences Tohoku University {aki-s, hanawa, okazaki, inui}@ecei.tohoku.ac.jp Abstract We present in this paper our approach for modeling inter-topic preferences of Twitter users: for example, those who agree with the Trans-Pacific Partnership (TPP) also agree with free trade. This kind of knowledge is useful not only for stance detection across multiple topics but also for various real-world applications including public opinion surveys, electoral predictions, electoral campaigns, and online debates. In order to extract users’ preferences on Twitter, we design linguistic patterns in which people agree and disagree about specific topics (e.g., “A is completely wrong”). By applying these linguistic patterns to a collection of tweets, we extract statements agreeing and disagreeing with various topics. Inspired by previous work on item recommendation, we formalize the task of modeling intertopic preferences as matrix factorization: representing users’ preferences as a usertopic matrix and mapping both users and topics onto a latent feature space that abstracts the preferences. Our experimental results demonstrate both that our proposed approach is useful in predicting missing preferences of users and that the latent vector representations of topics successfully encode inter-topic preferences. 1 Introduction Social media have changed the way people shape public opinion. The latest survey by the Pew Research Center reported that a majority of US adults (62%) obtain news via social media, and of those, 18% do so often (Gottfried and Shearer, 2016). Given that news and opinions are shared and amplified by friend networks of individuals (Jamieson and Cappella, 2008), individuals are thereby isolated from information that does not fit well with their opinions (Pariser, 2011). Ironically, cutting-edge social media technologies promote ideological groups even with its potential to deliver diverse information. A large number of studies already analyzed discussions, interactions, influences, and communities on social media along the political spectrum from liberal to conservative (Adamic and Glance, 2005; Zhou et al., 2011; Cohen and Ruths, 2013; Bakshy et al., 2015; Wong et al., 2016). Even though these studies provide intuitive visualizations and interpretations along the liberalconservative axis, political analysts argue that the axis is flawed and insufficient for representing public opinion and ideologies (Kerlinger, 1984; Maddox and Lilie, 1984). 
A potential solution for analyzing multiple axes of the political spectrum on social media is stance detection (Thomas et al., 2006; Somasundaran and Wiebe, 2009; Murakami and Raymond, 2010; Anand et al., 2011; Walker et al., 2012; Mohammad et al., 2016; Johnson and Goldwasser, 2016), whose task is to determine whether the author of a text is for, neutral, or against a topic (e.g., free trade, immigration, abortion). However, stance detection across different topics is extremely difficult. Anand et al. (2011) reported that a sophisticated method with topic-dependent features substantially improved the performance of stance detection within a topic, but such an approach could not outperform a baseline method with simple n-gram features when evaluated across topics. More recently, all participants of SemEval 2016 Task 6A (with five topics) could not outperform the baseline supervised method using n-gram features (Mohammad et al., 2016). In addition, stance detection encounters dif398 1.0 1.0 1.0 0.5 0.7 -1.0 -1.0 -0.7 -0.5 User 1 User 2 User 3 User 4 Topic 1 Topic 2 Topic 3 Topic 4 ~ ~ R P T × Q = 0.9 0.9 1.0 0.5 0.7 -1.0 -1.0 -0.7 -0.5 User 1 User 2 User 3 User 4 Topic 1 Topic 2 Topic 3 Topic 4 R -0.1 -0.1 -0.1 -0.7 0.3 -0.4 -0.9 ^ Corpus (tweets) (User-topic matrix) (User vectors) (Topic vectors) (Low-rank approximation) A good news. http://t.to/...... #TPPhantai TPP ruins the future of our country. ............ ............ ............ A ruins the future of our country. I support A A is necessary Welcome A We should introduce A ...... I disagree A A is completely wrong A ruins the future of our country ...... A good news. http://t.to/...... #TPPhantai TPP ruins the future of our country. ............ ............ ............ A good news. http://t.to/...... #TPPhantai TPP ruins the future of our country. ............ ............ ............ A good news. http://t.to/...... #TPPhantai TPP ruins the future of our country. ............ ............ ............ Tweets posted by users who have used pro/con hastags A is completely wrong to A We should introduce A This is A Linguistic pro/con patterns Pattern candidates in which the users describe topics Matrix factorization Extraction of topics, users, and pattern candidates Sort candidates and select useful patterns Mine topic preferences pro con Figure 1: An overview of this study. ficulties with different user types. Cohen and Ruths (2013) observed that existing methods on stance detection fail on “ordinary” users because such methods primarily obtain training and test data from politically vocal users (e.g., politicians); for example, they found that a stance detector trained on a dataset with politicians achieved 91% accuracy on other politicians but only achieved 54% accuracy on “ordinary” users. Establishing a bridge across different topics and users remains a major challenge not only in stance detection, but also in social media analytics. An important component in establishing this bridge is commonsense knowledge about topics. For example, consider a topic a revision of Article 96 of the Japanese Constitution. We infer that the statement “we should maintain armed forces” tends to favor this topic even without any lexical overlap between the topic and the statement. This inference is reasonable because: the writer of the statement favors armed forces; those who favor armed forces also favor a revision of Article 91; and those who favor a revision of Article 9 also favor a revision of Article 962. 
In general, this kind of commonsense knowledge can be expressed in 1Article 9 prohibits armed forces in Japan. 2Article 96 specifies high requirements for making amendments to Constitution of Japan (including Article 9). the format: those who agree/disagree with topic A also agree/disagree with topic B. We call this kind of knowledge inter-topic preference throughout this paper. We conjecture that previous work on stance detection indirectly learns inter-topic preferences within the same target through the use of n-gram features on a supervision data. In contrast, in the present paper, we directly acquire inter-topic preferences from an unlabeled corpus of tweets. This acquired knowledge regarding inter-topic preferences is useful not only for stance detection, but also for various real-world applications including public opinion survey, electoral campaigns, electoral predictions, and online debates. Figure 1 provides an overview of this work. In our system, we extract linguistic patterns in which people agree and disagree about specific topics (e.g., “A is completely wrong”); to accomplish this, as described in Section 2.1, we make use of hashtags within a large collection of tweets. The patterns are then used to extract instances of users’ preferences regarding various topics, as detailed in Section 2.2. Inspired by previous work on item recommendation, in Section 3, we formalize the task of modeling inter-topic preferences as a matrix factorization: representing a sparse user-topic matrix (i.e., the extracted instances) with the prod399 uct of low-rank user and topic matrices. These low-rank matrices provide latent vector representations of both users and topics. This approach is also useful for completing preferences of “ordinary” (i.e., less vocal) users, which fills the gap between different types of users. The contributions of this paper are threefold. 1. To the best of our knowledge, this is the first study that models inter-topic preferences for unlimited targets on real-world data. 2. Our experimental results show that this approach can accurately predict missing topic preferences of users accurately (80–94%). 3. Our experimental results also demonstrate that the latent vector representations of topics successfully encode inter-topic preferences, e.g., those who agree with nuclear power plants also agree with nuclear fuel cycles. This study uses a Japanese Twitter corpus because of its availability from the authors, but the core idea is applicable to any language. 2 Mining Topic Preferences of Users In this section, we describe how we collect statements in which users agree or disagree with various topics on Twitter, which then serves as source data for modeling inter-topic preferences. More formally, we are interested in acquiring a collection of tuples (u, t, v), where: u ∈U is a user; U is the set of all users on Twitter; t ∈T is a topic; T is the set of all topics; and v ∈{+1, −1} is +1 when the user u agrees with the topic t and −1 otherwise (i.e., disagreement). Throughout this work, we use a corpus consisting of 35,328,745,115 Japanese tweets (7,340,730 users) crawled from February 6, 2013 to September 30, 2016. We removed retweets from the corpus. 2.1 Mining Linguistic Patterns of Agreement and Disagreement We use linguistic patterns to extract tuples (u, t, v) from the aforementioned corpus. More specifically, when a tweet message matches to one of linguistic patterns of agreement (e.g., “t is necessary”), we regard that the author u of the tweet agrees with topic t. 
Conversely, a statement of disagreement is identified by linguistic patterns for disagreement (e.g., “t is unacceptable”). In order to design linguistic patterns, we focus on hashtags appearing in the corpus that have been popular clues for locating subjective statements such as sentiments (Davidov et al., 2010), emotions (Qadir and Riloff, 2014), and ironies (Van Hee et al., 2016). Hashtags are also useful for finding strong supporters and critics, as well as their target topics; for example, #immigrantsWelcome indicates that the author favors immigrants; and #StopAbortion is against abortion. Based on this intuition, we design regular expressions for both pro hashtags “#(.+)sansei”3 and con hashtags “#(.+)hantai”4, where (.+) matches a target topic. These regular expressions can find users who have strong preferences to topics. Using this approach, we extracted 31,068 occurrences of pro/con hashtags used by 18,582 users for 4,899 topics. We regard the set of topics found using this procedure as set of target topics T in this study. Each time we encounter a tweet containing a pro/con hashtag, we searched for corresponding textual statements as follows. Suppose that a tweet includes a hashtag (e.g., #TPPsansei) for a topic t (e.g., TPP). Assuming that the author of the given tweet does not change their attitude toward a topic over time, we search for other tweets posted by the same author that also have the topic keyword t. This process retrieves tweets like “I support TPP.” Then, we replace the topic keyword into a variable A to extract patterns, e.g., “I support A.” Here, the definition of the pattern unit is language specific. For Japanese tweets, we simply recognize a pattern that starts with a variable (i.e., topic) and ends at the end of the sentence5. Because this procedure also extracts useless patterns such as “to A” and “this is A”, we manually choose useful patterns in a systematic way: sort patterns in descending order of the number of users who use the pattern; and check the sorted list of patterns manually; and remove useless patterns. 3Unlike English hashtags, we systematically attach a noun sansei, which stands for pro (agreement) in Japanese, to a topic, for example, #TPPsansei. This paper uses the alphabetical expression sansei only for explanation; the actual pattern uses Chinese characters corresponding to sansei. 4A Japanese noun hantai stands for con (disagreement), for example, #TPPhantai. This paper uses the alphabetical expression hantai only for explanation; the actual pattern uses Chinese characters corresponding to hantai. 5In English, this treatment roughly corresponds to extracting a verb phrase with the variable A. 400 Using this approach, we obtained 100 pro patterns (e.g., “welcome A” and “A is necessary”) and 100 con patterns (“do not let A” and “I don’t want A”). 2.2 Extracting Instances of Topic Preferences By using the pro and con patterns acquired using the approach described in Section 2.1, we extract instances of (u, t, v) as follows. When a sentence in a tweet whose author is user u matches one of the pro patterns (e.g., “t is necessary”) and the topic t is included in the set of target topics T, we recognize this as an instance of (u, t, +1). Similarly, when a sentence matches one of the con patterns (e.g., “I don’t want t”) and the topic t is included in the set of target topics T, we recognize this as an instance of (u, t, −1). Using this approach, we collected 25,805,909 tuples corresponding to 3,302,613 users and 4,899 topics. 
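The mining step of Section 2 boils down to two passes over the tweet stream: one regular-expression pass over pro/con hashtags to discover target topics, and one pattern-matching pass to emit (u, t, v) tuples. The following Python sketch is our own illustration of that skeleton; `tweets` is a hypothetical iterable of (user, text) pairs, the hashtags are written in romaji exactly as in the paper's exposition, and the English pattern strings stand in for the Japanese patterns actually mined.

```python
import re
from collections import Counter

PRO_HASHTAG = re.compile(r"#(.+)sansei")    # e.g., #TPPsansei  -> topic "TPP", stance +1
CON_HASHTAG = re.compile(r"#(.+)hantai")    # e.g., #TPPhantai  -> topic "TPP", stance -1

def collect_topics(tweets):
    """First pass: discover the set of target topics T from pro/con hashtags."""
    topics = Counter()
    for user, text in tweets:
        for pattern in (PRO_HASHTAG, CON_HASHTAG):
            for topic in pattern.findall(text):
                topics[topic] += 1
    return set(topics)

def extract_instances(tweets, topics, pro_patterns, con_patterns):
    """Second pass: match the manually selected pro/con patterns and emit (u, t, +1/-1) tuples.
    A real implementation would index tweets by topic keyword instead of looping over all topics."""
    instances = []
    for user, text in tweets:
        for t in topics:
            if t not in text:
                continue
            if any(p.format(t=t) in text for p in pro_patterns):
                instances.append((user, t, +1))
            if any(p.format(t=t) in text for p in con_patterns):
                instances.append((user, t, -1))
    return instances

# Illustrative English stand-ins for the (Japanese) patterns selected in Section 2.1.
pro_patterns = ["I support {t}", "{t} is necessary", "welcome {t}"]
con_patterns = ["I don't want {t}", "do not let {t}", "{t} is completely wrong"]
```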
Because these collected tuples included comparatively infrequent users and topics, we removed users and topics that appeared less than five times. In addition, there were also meaningless frequent topics such as "of" and "it". Therefore, we sorted topics in descending order of their co-occurrence frequencies with each of the pro patterns and con patterns, and then removed meaningless topics in the top 100 topics. This resulted in 9,961,509 tuples regarding 273,417 users and 2,323 topics.

3 Matrix Factorization

Using the methods described in Section 2, we collected from the corpus a number of instances of users' preferences regarding various topics. However, Twitter users do not necessarily express preferences for all topics. In addition, it is by nature impossible to predict whether a new (i.e., nonexistent in the data) user agrees or disagrees with given topics. Therefore, in this section, we apply matrix factorization (Koren et al., 2009) in order to predict missing values, inspired by research regarding item recommendation (Bell and Koren, 2007; Dror et al., 2011). In essence, matrix factorization maps both users and topics onto a latent feature space that abstracts topic preferences of users. Here, let R be a sparse matrix of size |U| × |T|. Only when a user u expresses a preference for topic t do we compute an element of the sparse matrix r_{u,t},

r_{u,t} = \frac{\#(u,t,+1) - \#(u,t,-1)}{\#(u,t,+1) + \#(u,t,-1)}    (1)

Here, #(u, t, +1) and #(u, t, −1) represent the numbers of occurrences of instances (u, t, +1) and (u, t, −1), respectively. Thus, an element r_{u,t} approaches +1 as the user u favors the topic t, and −1 otherwise. If the user u does not make any statement regarding the topic t (i.e., neither (u, t, +1) nor (u, t, −1) exists in the data), we do not fill the corresponding element, leaving it as a missing value. Matrix factorization decomposes the sparse matrix R into low-dimensional matrices P ∈ R^{k×|U|} and Q ∈ R^{k×|T|}, where k is a parameter that specifies the number of dimensions of the latent space. We minimize the following objective function to find the matrices P and Q,

\min_{P,Q} \sum_{(u,t) \in R} \left( (r_{u,t} - \mathbf{p}_u^\top \mathbf{q}_t)^2 + \lambda_P \|\mathbf{p}_u\|^2 + \lambda_Q \|\mathbf{q}_t\|^2 \right).    (2)

Here, (u, t) ∈ R ranges over the elements filled in the sparse matrix R, p_u ∈ R^k and q_t ∈ R^k are the u-th column vector of P and the t-th column vector of Q, respectively, and λ_P ≥ 0 and λ_Q ≥ 0 represent coefficients of the regularization terms. We call p_u and q_t the user vector and topic vector, respectively. Using these user and topic vectors, we can predict an element \hat{r}_{u,t} that may be missing in the original matrix R,

\hat{r}_{u,t} \simeq \mathbf{p}_u^\top \mathbf{q}_t.    (3)

We use libmf (https://github.com/cjlin1/libmf) (Chin et al., 2015) to solve the optimization problem in Equation 2. We set regularization coefficients λ_P = 0.1 and λ_Q = 0.1 and use default values for the other parameters of libmf.

4 Evaluation

4.1 Determining the Dimension Parameter k

How good is the low-rank approximation found by matrix factorization? And can we find the "sweet spot" for the number of dimensions k of the latent space? We investigate the reconstruction error of matrix factorization using different values of k to answer these questions. We use Root Mean Squared Error (RMSE) to measure error,

\mathrm{RMSE} = \sqrt{\frac{\sum_{(u,t) \in R} (\mathbf{p}_u^\top \mathbf{q}_t - r_{u,t})^2}{N}}.    (4)

Here, N is the number of elements in the sparse matrix R (i.e., the number of known values).

[Figure 2: Reconstruction error (RMSE) of matrix factorization with different k (k = 1, 2, 5, 10, 30, 50, 100, 300, 500).]
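The paper solves Equation 2 with libmf, but the objective is easy to approximate with plain stochastic gradient descent. The following NumPy sketch is a simplified stand-in for libmf (not libmf itself); the learning rate, epoch count, and toy data are illustrative, while λ_P = λ_Q = 0.1 matches the setting above. It factorises a dictionary of observed r_{u,t} values and reports the RMSE of Equation 4.

```python
import numpy as np

def factorize(R_obs, n_users, n_topics, k=100, lam=0.1, lr=0.05, epochs=30, seed=0):
    """R_obs: dict mapping (u, t) -> r_ut in [-1, 1], observed entries only."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))     # user vectors p_u as rows
    Q = 0.1 * rng.standard_normal((n_topics, k))    # topic vectors q_t as rows
    entries = list(R_obs.items())
    for _ in range(epochs):
        for idx in rng.permutation(len(entries)):
            (u, t), r = entries[idx]
            pu, qt = P[u].copy(), Q[t].copy()
            err = r - pu @ qt                       # residual of the squared term in Eq. 2
            P[u] += lr * (err * qt - lam * pu)      # SGD step on p_u (L2-regularized)
            Q[t] += lr * (err * pu - lam * qt)      # SGD step on q_t
    return P, Q

def rmse(R_obs, P, Q):
    """Root mean squared error over the known entries, Eq. 4."""
    errs = [(P[u] @ Q[t] - r) ** 2 for (u, t), r in R_obs.items()]
    return float(np.sqrt(np.mean(errs)))

# Toy example: three users, three topics, five observed preference values.
R_obs = {(0, 0): 1.0, (0, 1): -1.0, (1, 1): -1.0, (2, 0): 1.0, (2, 2): -1.0}
P, Q = factorize(R_obs, n_users=3, n_topics=3, k=5)
print(rmse(R_obs, P, Q), P[1] @ Q[0])    # reconstruction error and one predicted missing entry (Eq. 3)
```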
Figure 2 shows RMSE values over iterations of libmf with the dimension parameter k ∈ {1, 2, 5, 10, 30, 50, 100, 300, 500}. We observed that the reconstruction error decreased as the iterative method of libmf progressed. The larger the number of dimensions k was, the smaller the reconstruction error became; the lowest reconstruction error was 0.3256 with k = 500. We also observed the error with k = 1, which corresponds to mapping users and topics onto one dimension similarly to the political spectrum of liberal and conservative. Judging from the relatively high RMSE values with k = 1, we conclude that it may be difficult to represent everything in the data using a one-dimensional axis. Based on this result, we concluded that matrix factorization with k = 100 is sufficient for reconstructing the original matrix R and therefore used this parameter value for the rest of our experiments. 4.2 Predicting Missing Topic Preferences How accurately can the user and topic vectors predict missing topic preferences? To answer this question, we evaluate the accuracy in predicting hidden preferences in the matrix R as follows. First, we randomly selected 5% of existing elements in R and let Y represent the collection of the selected elements (test set). We then perform matrix factorization on the sparse matrix without the selected elements of Y , that is, only with the remaining 95% elements of R (training set). We define the accuracy of the prediction as 1 |Y | X u,t∈Y 1 (sign(ˆru,t) = sign(ru,t)) (5) Matrix factorization Majority baseline Figure 3: Prediction accuracy when changing the threshold for the number of known topic preferences of each user. Here, ru,t denotes the actual (i.e., self-declared) preference values, ˆru,t represents the preference value predicted by Equation 3, sign(.) represents the sign of the argument, and 1(.) yields 1 only when the condition described in the argument holds and 0 otherwise. In other words, Equation 5 computes the proportion of correct predictions to all predictions, assuming zero to be the decision boundary between pro and con. Figure 3 plots prediction accuracy values calculated from different sets of users. Here the xaxis represents a threshold θ, which filters out users whose declarations of topic preferences are no greater than θ topics. In other words, Figure 3 shows prediction accuracy when we know user preferences for at least θ topics. For comparison, we also include the majority baseline that predicts pro and con based on the majority of preferences regarding each topic in the training set. Our proposed method was able to predict missing preferences with an 82.1% accuracy for users stating preferences for at least five topics. This accuracy increased as our method received more information regarding the users, reaching a 94.0% accuracy when θ = 100. This result again indicates that our proposed method reasonably utilizes known preferences to complete missing preferences. In contrast, the performance of the majority baseline decreased as it received more information regarding the users. Because this result was rather counter-intuitive, we examined the cause of this phenomenon. Consequently, this result turned out to be reasonable because preferences of vocal users deviated from those of the average users. 
Figure 4 illustrates this finding, showing the mean of variances of preference values ru,t across self402 0 20 40 60 80 100 120 Threshold for the number of topics mentioned by users 0.45 0.50 0.55 0.60 0.65 0.70 Mean variance of mentioned topics Figure 4: Mean variance of preference values of self-declared topics when changing the threshold for the number of self-declared topics. declared topics. In the figure, the x-axis represents a threshold θ, which filters out users whose statements of topic preferences are no greater than θ topics. We observe that the mean variance increased as we focused on vocal users. Overall, these results demonstrate the usefulness of user and topic vectors in predicting missing preferences. Table 1 shows examples in which missing preferences of two users were predicted from known statements of agreements and disagreements7. In the table, predicted topics are accompanied by the corresponding ˆru,t value in parentheses. As an example, our proposed method predicted that the user A, who is positive toward regime change but negative toward Okinawa US military base, may also be positive toward vote of non-confidence to Cabinet but negative toward construction of a new base. 4.3 Inter-topic Preferences Do the topic vectors obtained by matrix factorization capture inter-topic preferences, such as “People who agree with A also agree with B”? Because no dataset exists for this evaluation, we created a dataset of pairwise inter-topic preferences by using a crowdsourcing service8. Sampling topic pairs randomly, we collected 150 topic pairs whose cosine similarities of topic vectors 7We anonymized user names in these examples. In addition, we removed topics that are too discriminatory or aggressive to other countries and races. Even though the experimental results of this paper do not necessarily reflect our idea, we do not think it is a good idea to distribute politically incorrect ideas through this paper. 8We used Yahoo! Crowdsourcing, a Japanese online service for crowdsourcing. http://crowdsourcing.yahoo.co.jp/ were below −0.6, 150 pairs whose cosine similarities were between −0.6 and 0.6, and 150 pairs whose cosine similarities were above 0.6. In this way, we obtained 450 topic pairs for evaluation. Given a pair of topics A and B, a crowd worker was asked to choose a label from the following three options: (a) those who agree/disagree with topic A may also agree/disagree with topic B; (b) those who agree/disagree with topic A may conversely disagree/agree with topic B; (c) otherwise (no association between A and B). Creating twenty pairs of topics as gold data, we removed labeling results from workers whose accuracy is less than 90%. Consequently, we obtained 6–10 human judgements for every topic pair. Regarding (a) as +1 point, (b) as −1 point, and (c) as 0 point, we computed the mean of the points (i.e., average human judgements) for each topic pair. Spearman’s rank correlation coefficient (ρ) between cosine similarity values of topic vectors and human judgements was 0.2210. We could observe a moderate correlation even though inter-topic preferences collected in this manner were highly subjective. In addition to the quantitative evaluation, as summarized in Table 2, we also checked similar topics for three controversial topics, Liberal Democratic Party (LDP), constitutional amendment and right of foreigners to vote (Table 2). 
Topics similar to LDP included synonymous ones (e.g., Abe’s LDP and Abe administration) and other topics promoted by the LDP (e.g., resuming nuclear power plant operations, bus rapid transit (BRT) and hate speech countermeasure law). Considering that people who support the LDP may also tend to favor its policies, we found these results reasonable. As for the other example, constitutional amendment had a feature vector that was similar to that of amendment of Article 9, enforcement of specific secret protection law and security related law. From these results, we concluded that topic vectors were able to capture inter-topic preferences. 5 Related Work In this section, we summarize the related work that spreads across various research fields. Social Science and Political Science A number of of studies analyze social phenomena regarding political activities, political thoughts, and public opinions on social media. These studies 403 User Type Topic A Agreement (declared) regime change, capital relocation Disagreement (declared) Okinawa US military base, nuclear weapons, TPP, Abe Cabinet, Abe government, nuclear cycle, right to collective defense, nuclear power plant, Abenomics Agreement (predicted) same-sex partnership ordinance (0.9697), vote of non-confidence to Cabinet (0.9248), national people’s government (0.9157), abolition of tax (0.8978) Disagreement (predicted) steamrollering war bill (-1.0522), worsening dispatch law (-1.0301), Sendai nuclear power plant (-1.0269), war bill (-1.0190), construction of a new base (-1.0186), Abe administration (-1.0173), landfill Henoko (-1.0158), unreasonable arrest (-1.0113) B Agreement (declared) visit shrine, marriage Disagreement(declared) tax increase, conscription, amend Article 9 Agreement (predicted) national people’s government (0.8467), abolition of tax (0.8300), same-sex partnership ordinance (0.7700), security bills (0.6736) Disagreement (predicted) corporate tax cuts (-1.0439), Liberal Democratic Party’s draft constitution (-1.0396), radioactivity (-1.0276), rubble (-1.0159), nuclear cycle (-1.0143) Table 1: Examples of agreement/disagreement topics predicted for two sample users A and B, with predicted score ˆru,v shown in parenthesis. Topic Topics with a high degree of cosine similarity Liberal Democratic Party (LDP) Abe’s LDP (0.3937), resuming nuclear power plant operations (0.3765), bus rapid transit (BRT) (0.3410), hate speech countermeasure law (0.3373), Henoko relocation (0.3353), C-130 (0.3338), Abe administration (0.3248), LDP & Komeito (0.2898), Prime Minister Abe (0.2835) constitutional amendment amendment of Article 9 (0.4520), enforcement of specific secret protection law (0.4399), security related law (0.4242), specific confidentiality protection law (0.4022), security bill amendment (0.3977), defense forces (0.3962), my number law (0.3874), collective self-defense rights (0.3687), militarist revival (0.3567) right of foreigners to vote human rights law (0.5405), anti-discrimination law (0.5376), hate speech countermeasure law (0.5080), foreigner’s life protection (0.4553), immigration refugee (0.4520), co-organized Olympics (0.4379) Table 2: Topics identified as being similar to the three controversial topics shown in the left column. 
model the political spectrum from liberal to conservative (Adamic and Glance, 2005; Zhou et al., 2011; Cohen and Ruths, 2013; Bakshy et al., 2015; Wong et al., 2016), political parties (Tumasjan et al., 2010; Boutet et al., 2013; Makazhanov and Rafiei, 2013), and elections (O’Connor et al., 2010; Conover et al., 2011). Employing a single axis (e.g., liberal to conservative) or a few axes (e.g., political parties and candidates of elections), these studies provide intuitive visualizations and interpretations along the respective axes. In contrast, this study is the first attempt to recognize and organize various axes of topics on social media with no prior assumptions regarding the axes. Therefore, we think our study provides a new tool for computational social science and political science that enables researchers to analyze and interpret phenomena on social media. Next, we describe previous research focused on acquiring lexical knowledge of politics. Sim et al. (2013) measured ideological positions of candidates in US presidential elections from their speeches. The study first constructs “cue lexicons” from political writings labeled with ideologies by domain experts, using sparse additive generative models (Eisenstein et al., 2011). These constructed cue lexicons were associated with such ideologies as left, center, and right. Representing each speech of a candidate with cue lexicons, they inferred the proportions of ideologies of the candidate. The study requires a predefined set of labels and text data associated with the labels. Bamman and Smith (2015) presented an unsupervised method for assessing the political stance of a proposition, such as “global warming is a hoax,” along the political spectrum of liberal to conservative. In their work, a proposition was represented by a tuple in the form ⟨subject, predicate⟩, for example, ⟨global warming, hoax⟩. They presented a generative model for users, subjects, and predicates to find a one-dimensional latent space that corresponded to the political spectrum. Similar to our present work, their work (Bamman and Smith, 2015) did not require labeled data 404 to map users and topics (i.e., subjects) onto a latent feature space. In their paper, they reported that the generative model outperformed Principal Component Analysis (PCA), which is a method for matrix factorization. Empirical results here probably reflected the underlying assumptions that PCA treats missing elements as zero and not as missing data. In contrast, in the present work, we properly distinguish missing values from zero, excluding missing elements of the original matrix from the objective function of Equation 2. Further, this work demonstrated the usefulness of the latent space, that is, topic and user vectors, in predicting missing topic preferences of users and inter-topic preferences. Fine-grained Opinion Analysis The method presented in Section 2 is an instance of finegrained opinion analysis (Wiebe et al., 2005; Choi et al., 2006; Johansson and Moschitti, 2010; Yang and Cardie, 2013; Deng and Wiebe, 2015), which extracts a tuple of a subjective opinion, a holder of the opinion, and a target of the opinion from text. Although these previous studies have the potential to improve the quality of the user-topic matrix R, unfortunately, no corpus or resource is available for the Japanese language. We do not currently have a large collection of English tweets, but combining fine-grained opinion analysis with matrix factorization is an immediate future work. 
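Returning to the distinction drawn above between missing entries and zeros, the sketch below contrasts a masked factorization objective, which scores observed cells only, with a PCA-style reconstruction that implicitly pulls unobserved cells toward zero. The loss is illustrative only and is not the paper's exact Equation 2; the regularization term and all toy values are assumptions.

# Sketch: masked matrix-factorization loss (missing cells excluded) versus a
# zero-filled, PCA-style reconstruction loss. Illustrative only.
import numpy as np

def masked_mf_loss(R, mask, P, Q, lam=0.01):
    """Squared error over observed cells only (mask==1), plus L2 regularization."""
    err = (R - P @ Q.T) * mask
    return np.sum(err ** 2) + lam * (np.sum(P ** 2) + np.sum(Q ** 2))

def zero_filled_loss(R, mask, P, Q):
    """PCA-style objective: unobserved entries are treated as if they were zero."""
    R_filled = R * mask               # missing entries become 0
    err = R_filled - P @ Q.T
    return np.sum(err ** 2)

# Toy example: a 3x3 preference matrix with one observed positive cell per user.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
mask = (R != 0).astype(float)         # 1 where a preference was actually stated
rng = np.random.default_rng(0)
P, Q = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
print(masked_mf_loss(R, mask, P, Q), zero_filled_loss(R, mask, P, Q))

Only the masked objective leaves unobserved preferences free, which is what makes nonzero predictions for never-stated topics possible.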
Causality Relation Some of inter-topic preferences in this work can be explained by causality relation, for example, “TPP promotes free trade.” A number of previous studies acquire instances of causal relation (Girju, 2003; Do et al., 2011) and promote/suppress relation (Hashimoto et al., 2012; Fluck et al., 2015) from text. The causality knowledge is useful for predicting (hypotheses of) future events (Radinsky et al., 2012; Radinsky and Davidovich, 2012; Hashimoto et al., 2015). Inter-topic preferences, however, also include pairs of topics in which causality relation hardly holds. As an example, it is unreasonable to infer that nuclear plant and railroading of bills have a causal relation, but those who dislike nuclear plant also oppose railroading of bills because presumably they think the governing political parties rush the bill for resuming a nuclear plant. In this study, we model these inter-topic preferences based on preferences of the public. That said, we have as a promising future direction of our work plans to incorporate approaches to acquire causality knowledge. 6 Conclusion In this paper, we presented a novel approach for modeling inter-topic preferences of users on Twitter. Designing linguistic patterns for identifying support and opposition statements, we extracted users’ preferences regarding various topics from a large collection of tweets. We formalized the task of modeling inter-topic preferences as a matrix factorization that maps both users and topics onto a latent feature space that abstracts users’ preferences. Through our experimental results, we demonstrated that our approach was able to accurately predict missing topic preferences of users (80–94%) and that our latent vector representations of topics properly encoded inter-topic preferences. For our immediate future work, we plan to embed the topic and user vectors to create a crosstopic stance detector. It is possible to generalize our work to model heterogeneous signals, such as interests and behaviors of people, for example, “those who are interested in A also support B,” and “those who favor A also vote for B”. Therefore, we believe that our work will bring about new applications in the field of NLP and other disciplines. Acknowledgements This work was supported by JSPS KAKENHI Grant Number 15H05318 and JST CREST Grant Number J130002054, Japan. References Lada A. Adamic and Natalie Glance. 2005. The political blogosphere and the 2004 U.S. election: Divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery (LinkKDD 2005). pages 36–43. https://doi.org/10.1145/1134271.1134277. Pranav Anand, Marilyn Walker, Rob Abbott, Jean E. Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2011). pages 1–9. Eytan Bakshy, Solomon Messing, and Lada A. Adamic. 2015. Exposure to ideologically diverse news and opinion on face405 book. Science 348(6239):1130–1132. https://doi.org/10.1126/science.aaa1160. David Bamman and Noah A. Smith. 2015. Open extraction of fine-grained political statements. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015). pages 76–85. https://doi.org/10.18653/v1/D15-1008. Robert M. Bell and Yehuda Koren. 2007. Lessons from the Netflix prize challenge. ACM SIGKDD Explorations Newsletter 9(2):75–79. https://doi.org/10.1145/1345448.1345465. 
Antoine Boutet, Hyoungshick Kim, and Eiko Yoneki. 2013. What’s in Twitter, I know what parties are popular and who you are supporting now! Social Network Analysis and Mining (SNAM 2012) 3(4):1379–1391. https://doi.org/10.1109/ASONAM.2012.32. Wei-Sheng Chin, Yong Zhuang, Yu-Chin Juan, and Chih-Jen Lin. 2015. A fast parallel stochastic gradient method for matrix factorization in shared memory systems. ACM Transactions on Intelligent Systems and Technology (TIST) 6(1):2. https://doi.org/10.1145/2668133. Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006). pages 431–439. http://aclweb.org/anthology/W06-1651. Raviv Cohen and Derek Ruths. 2013. Classifying political orientation on Twitter: It’s not easy! In Proc. of the Seventh International AAAI Conference on Weblogs and Social Media (ICWSM 2013). pages 91–99. Michael D Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of twitter users. In Privacy, 2011 IEEE Third International Conference on Security, Risk and Trust and 2011 IEEE Third Inernational Conference on Social Computing (PASSAT-SocialCom 2011). IEEE, pages 192–199. Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010). pages 241–249. http://aclweb.org/anthology/C10-2028. Lingjia Deng and Janyce Wiebe. 2015. MPQA 3.0: An entity/event-level sentiment corpus. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2015). pages 1323–1328. https://doi.org/10.3115/v1/N15-1146. Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011). pages 294–303. http://aclweb.org/anthology/D11-1027. Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. 2011. The Yahoo! Music dataset and KDD-Cup’11. In Proceedings of the 2011 International Conference on KDD Cup 2011 (KDDCUP 2011). pages 3–18. Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse additive generative models of text. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011). Juliane Fluck, Sumit Madan, Tilia Renate Ellendorff, Theo Mevissen, Simon Clematide, Adrian van der Lek, and Fabio Rinaldi. 2015. Track 4 overview: Extraction of causal network information in biological expression language (BEL). In Proceedings of the Fifth BioCreative Challenge Evaluation Workshop. pages 333–346. Roxana Girju. 2003. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering - Volume 12. pages 76–83. https://doi.org/10.3115/1119312.1119322. Jeffrey Gottfried and Elisa Shearer. 2016. News use across social media platforms 2016. Technical report, Pew Research Center. Chikara Hashimoto, Kentaro Torisawa, Stijn De Saeger, Jong-Hoon Oh, and Jun’ichi Kazama. 2012. Excitatory or inhibitory: A new semantic orientation extracts contradiction and causality from the web. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012). Association for Computational Linguistics, pages 619–630. http://aclweb.org/anthology/D12-1057. Chikara Hashimoto, Kentaro Torisawa, Julien Kloetzer, and Jong-Hoon Oh. 2015. Generating event causality hypotheses through semantic relations. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI 2015). pages 2396– 2403. Kathleen Hall Jamieson and Joseph N. Cappella. 2008. Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. Oxford University Press. Richard Johansson and Alessandro Moschitti. 2010. Syntactic and semantic structure for opinion expression detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL 2010). pages 67–76. http://aclweb.org/anthology/W10-2910. 406 Kristen Johnson and Dan Goldwasser. 2016. “All I know about politics is what I read in Twitter”: Weakly supervised models for extracting politicians’ stances from twitter. In Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016). pages 2966– 2977. http://aclweb.org/anthology/C16-1279. Fred N. Kerlinger. 1984. Liberalism and Conservatism: The Nature and Structure of Social Attitudes. Lawrence Erlbaum Associates. Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42(8):30–37. https://doi.org/10.1109/MC.2009.263. William S. Maddox and Stuart A. Lilie. 1984. Beyond Liberal and Conservative: Reassessing the Political Spectrum. Cato Inst. Aibek Makazhanov and Davood Rafiei. 2013. Predicting political preference of twitter users. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013). pages 298–305. https://doi.org/10.1145/2492517.2492527. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). pages 31–41. https://doi.org/10.18653/v1/S16-1003. Akiko Murakami and Rudy Raymond. 2010. Support or oppose?: classifying positions in online debates from reply activities and opinion expressions. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010). pages 869–875. http://aclweb.org/anthology/C10-2100. Brendan O’Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media (ICWSM 2010). pages 122–129. Eli Pariser. 2011. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin Books. Ashequl Qadir and Ellen Riloff. 2014. Learning emotion indicators from tweets: Hashtags, hashtag patterns, and phrases. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014). pages 1203– 1209. https://doi.org/10.3115/v1/D14-1127. Kira Radinsky and Sagie Davidovich. 2012. Learning to predict from textual data. Journal of Artificial Intelligence Research (JAIR) 45(1):641–684. Kira Radinsky, Sagie Davidovich, and Shaul Markovitch. 2012. Learning causality for news events prediction. In Proceedings of the 21st International Conference on World Wide Web (WWW 2012). 
pages 909–918. https://doi.org/10.1145/2187836.2187958. Yanchuan Sim, Brice D. L. Acree, Justin H. Gross, and Noah A. Smith. 2013. Measuring ideological proportions in political speeches. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013). pages 91– 101. http://aclweb.org/anthology/D13-1010. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009). pages 226– 234. http://aclweb.org/anthology/P09-1026. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006). pages 327–335. http://aclweb.org/anthology/W061639. Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In Fourth International AAAI Conference on Weblogs and Social Media (ICWSM 2010). pages 178–185. Cynthia Van Hee, Els Lefever, and Veronique Hoste. 2016. Monday mornings are my fave :) #not exploring the automatic recognition of irony in english tweets. In Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016). pages 2730–2739. http://aclweb.org/anthology/C16-1257. Marilyn A. Walker, Pranav Anand, Rob Abbott, Jean E. Fox Tree, Craig Martell, and Joseph King. 2012. That is your evidence?: Classifying stance in online political debate. Decision Support Systems 53(4):719–729. https://doi.org/10.1016/j.dss.2012.05.032. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation 39(2):165–210. https://doi.org/10.1007/s10579-005-7880-9. Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2016. Quantifying political leaning from tweets, retweets, and retweeters. IEEE Transactions on Knowledge and Data Engineering 28(8):2158–2172. https://doi.org/10.1109/TKDE.2016.2553667. 407 Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013). pages 1640–1649. http://aclweb.org/anthology/P131161. Daniel Xiaodan Zhou, Paul Resnick, and Qiaozhu Mei. 2011. Classifying the political leaning of news articles and users from user votes. In Fifth International AAAI Conference on Weblogs and Social Media (ICWSM 2011). pages 417–424. 408
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 409–419 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1038 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 409–419 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1038 Automatically Labeled Data Generation for Large Scale Event Extraction Yubo Chen1,2, Shulin Liu1,2, Xiang Zhang1, Kang Liu1 and Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 University of Chinese Academy of Sciences, Beijing, 100049, China {yubo.chen, shulin.liu, xiang.zhang, kliu, jzhao}@nlpr.ia.ac.cn Abstract Modern models of event extraction for tasks like ACE are based on supervised learning of events from small hand-labeled data. However, hand-labeled training data is expensive to produce, in low coverage of event types, and limited in size, which makes supervised methods hard to extract large scale of events for knowledge base population. To solve the data labeling problem, we propose to automatically label training data for event extraction via world knowledge and linguistic knowledge, which can detect key arguments and trigger words for each event type and employ them to label events in texts automatically. The experimental results show that the quality of our large scale automatically labeled data is competitive with elaborately human-labeled data. And our automatically labeled data can incorporate with human-labeled data, then improve the performance of models learned from these data. 1 Introduction Event Extraction (EE), a challenging task in Information Extraction, aims at detecting and typing events (Event Detection), and extracting arguments with different roles (Argument Identification) from natural-language texts. For example, in the sentence shown in Figure 1, an EE system is expected to identify an Attack event triggered by threw and extract the corresponding five augments with different roles: Yesterday (Role=Time), demonstrators (Role=Attacker), stones (Role=Instrument), soldiers (Role=Target), and Israeli (Role=Place). To this end, so far most methods (Nguyen et al., Michelle Obama and Barack Obama were on October 3, 1992. Marry Person Person Time Figure 1: This sentence expresses an Attack event triggered by threw and containing five arguments. 2016; Chen et al., 2015; Li et al., 2014; Hong et al., 2011; Ji and Grishman, 2008) usually adopted supervised learning paradigm which relies on elaborate human-annotated data, such as ACE 20051, to train extractors. Although this paradigm was widely studied, existing approaches still suffer from high costs for manually labeling training data and low coverage of predefined event types. In ACE 2005, all 33 event types are manually predefined and the corresponding event information (including triggers, event types, arguments and their roles) are manually annotated only in 599 English documents since the annotation process is extremely expensive. As Figure 2 shown, nearly 60% of event types in ACE 2005 have less than 100 labeled samples and there are even three event types which have less than ten labeled samples. Moreover, those predefined 33 event types are in low coverage for Natural Language Processing (NLP) applications on large-scale data. 
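As a concrete illustration of the target output discussed above, the sketch below encodes the Attack example (trigger threw with its five role-labelled arguments) in a simple data structure. This is only an illustrative representation, not the ACE annotation format or the authors' code.

# Sketch of the structured output an EE system produces for the Attack example.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class EventMention:
    trigger: str
    event_type: str
    arguments: Dict[str, str] = field(default_factory=dict)  # role -> argument text

attack = EventMention(
    trigger="threw",
    event_type="Attack",
    arguments={
        "Time": "Yesterday",
        "Attacker": "demonstrators",
        "Instrument": "stones",
        "Target": "soldiers",
        "Place": "Israeli",
    },
)
print(attack.event_type, sorted(attack.arguments))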
Therefore, for extracting large scale events, especially in open domain scenarios, how to automatically and efficiently generate sufficient training data is an important problem. This paper aims to automatically generate training data for EE, which involves labeling triggers, event types, arguments and their roles. Figure 1 shows an example of labeled sentence. Recent improvements of Distant Supervision (DS) have been proven to be effective to label training data for Relation Extraction (RE), which aims to predict semantic re1http://projects.ldc.upenn.edu/ace/ 409 0 200 400 600 800 1000 1200 1400 1600 Figure 2: Statistics of ACE 2005 English Data. lations between pairs of entities, formulated as (entity1, relation, entity2). And DS for RE assumes that if two entities have a relationship in a known knowledge base, then all sentences that mention these two entities will express that relationship in some way (Mintz et al., 2009). However, when we use DS for RE to EE, we meet following challenges: Triggers are not given out in existing knowledge bases. EE aims to detect an event instance of a specific type and extract their arguments and roles, formulated as (event instance, event type; role1, argument1; role2, argument2; ...; rolen, argumentn), which can be regarded as a kind of multiple or complicated relational data. In Figure 3, the right part shows an example of spouse of relation between Barack Obama and Michelle Obama, where two rectangles represent two entities and the edge connecting them represents their relation. DS for RE uses two entities to automatically label training data; In comparison, the left part in Figure 3 shows a marriage event of Barack Obama and Michelle Obama, where the dash circle represents the marriage event instance of Barack Obama and Michelle Obama, rectangles represent arguments of the event instance, and each edge connecting an argument and the event instance expresses the role of the argument. For example, Barack Obama plays a Spouse role in this marriage event instance. It seems that we could use an event instance and an argument to automatically generate training data for argument identification just like DS for RE. However, an event instance is a virtual node in existing knowledge bases and mentioned implicitly in texts. For example, in Freebase, the aforementioned marriage event instance is represented as m.02nqglv (see details in Section 2). Thus we cannot directly use an event instance and an argument, like m.02nqglv and Barack Obama, to label back Marriage Michelle Obama 10/03/1992 Trinity United Church of Christ Null Spouse Spouse location of ceremony time_from time_to Barack Obama Michelle Obama Spouse_of An example of marriage event An example of spouse_of relation Michelle Obama Figure 3: A comparison of events and relations. in sentences. In ACE event extraction program, an event instance is represented as a trigger word, which is the main word that most clearly represents an event occurrence in sentences, like threw in Figure 1. Following ACE, we can use trigger words to represent event instance, like married for people.marriage event instance. Unfortunately, triggers are not given out in existing knowledge bases. To resolve the trigger missing problem mentioned above, we need to discover trigger words before employing distant supervision to automatically label event arguments. 
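A small sketch of the contrast just described: the DS-for-RE heuristic can anchor on the two entity mentions, whereas a Freebase event instance is a virtual node whose identifier never appears in text and whose trigger is not recorded. The example sentence, the helper function, and the simplified role names (spouse_1/spouse_2 instead of the repeated spouse role) are assumptions made for illustration.

# Sketch contrasting DS for relation extraction with the event case.
def ds_label_relation(sentence, entity1, entity2):
    """DS for RE: label a sentence if it mentions both entities of a KB relation."""
    return entity1 in sentence and entity2 in sentence

sentence = "Barack Obama married Michelle Obama at Trinity United Church of Christ."
print(ds_label_relation(sentence, "Barack Obama", "Michelle Obama"))  # True

# An event instance is only a virtual node with role values; no trigger word such
# as "married" is stored, so there is nothing comparable to anchor on in text
# until trigger words are discovered.
marriage_event = {
    "id": "m.02nqglv",
    "type": "people.marriage",
    "roles": {"spouse_1": "Barack Obama",
              "spouse_2": "Michelle Obama",
              "from": "10/3/1992",
              "location_of_ceremony": "Trinity United Church of Christ"},
}
print(marriage_event["id"] in sentence)  # False: the instance id never appears in text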
Following DS in RE, we could naturally assume that a sentence contains all arguments of an event in the knowledge base tend to express that event, and the verbs occur in these sentences tend to evoke this type of events. However, arguments for a specific event instance are usually mentioned in multiple sentences. Simply employing all arguments in the knowledge base to label back in sentences will generate few sentences as training samples. As shown in Table 1, only 0.02% of instances can find all argument mentions in one sentence. Event Type EI# A# S# education.education 530,538 8 0 film.film crew gig 252,948 3 8 people.marriage 152,276 5 0 ... ... ... ... military.military service 27,933 6 0 olympics.olympic medal honor 20,790 5 4 sum of the selected 21 events 3,870,492 100 798 Table 1: Statistics of events in Freebase. EI# denotes number of event instances in Freebase. A# denotes number of arguments for each event types, and S# indicates number of sentences contain all arguments of each event type in Wikipedia. To solve above problems, we propose an approach to automatically generate labeled data for large scale EE by jointly using world knowledge (Freebase) and linguistic knowledge (FrameNet). At first, we put forward an approach to prioritize 410 arguments and select key or representative arguments (see details in Section 3.1) for each event type by using Freebase; Secondly, we merely use key arguments to label events and figure out trigger words; Thirdly, an external linguistic knowledge resource, FrameNet, is employed to filter noisy trigger words and expand more triggers; After that, we propose a Soft Distant Supervision (SDS) for EE to automatically label training data, which assumes that any sentence containing all key arguments in Freebase and a corresponding trigger word is likely to express that event in some way, and arguments occurring in that sentence are likely to play the corresponding roles in that event. Finally, we evaluate the quality of the automatically labeled training data by both manual and automatic evaluations. In addition, we employ a CNNbased EE approach with multi-instance learning for the automatically labeled data as a baseline for further research on this data. In summary, the contributions of this paper are as follows: • To our knowledge, it is the first work to automatically label data for large scale EE via world knowledge and linguistic knowledge. All the labeled data in this paper have been released and can be downloaded freely2. • We propose an approach to figure out key arguments of an event by using Freebase, and use them to automatically detect events and corresponding trigger words. Moreover, we employ FrameNet to filter noisy triggers and expand more triggers. • The experimental results show that the quality of our large scale automatically labeled data is competitive with elaborately humanannotated data. Also, our automatically labeled data can augment traditional humanannotated data, which could significantly improve the extraction performance. 2 Background In this paper, we respectively use Freebase as our world knowledge containing event instance and FrameNet as the linguistic knowledge containing trigger information. The articles in Wikipedia are used as unstructured texts to be labeled. 
To understand our method easily, we first introduce them as follows: 2https://github.com/acl2017submission/event-data Freebase is a semantic knowledge base (Bollacker et al., 2008), which makes use of mediators (also called compound value types, CVTs) to merge multiple values into a single value. As shown in Figure 3, people.marriage is one type of CVTs. There are many instances of people.marriage and the marriage of Barack Obama and Michelle Obama is numbered as m.02nqglv. Spouse, from, to and location of ceremony are roles of the people.marriage CVTs. Barack Obama, Michelle Obama, 10/3/1992 and Trinity United Church of Christ are the values of the instances. In this paper, we regard these CVTs as events, type of CVTs as event type, CVT instances as event instances, values in CVTs as arguments in events and roles of CVTs as the roles of arguments play in the event, respectively. According to the statistics of the Freebase released on 23th April, 2015, there are around 1885 CVTs and around 14 million CVTs instances. After filtering out useless and meaningless CVTs, such as CVTs about user profiles and website information, we select 21 types of CVTs with around 3.8 million instances for experiments, which mainly involves events about education, military, sports and so on. FrameNet3 is a linguistic resource storing information about lexical and predicate argument semantics (Baker et al., 1998). FrameNet contains more than 1, 000 frames and 10, 000 Lexical Units (LUs). Each frame of FrameNet can be taken as a semantic frame of a type of events (Liu et al., 2016). Each frame has a set of lemmas with part of speech tags that can evoke the frame, which are called LUs. For example, appoint.v is a LU of Appointing frame in FrameNet, which can be mapped to people.appointment events in Freebase. And a LUs of the frame plays a similar role as the trigger of an event. Thus we use FrameNet to detect triggers in our automatically data labeling process. Wikipedia4 that we used was released on January, 2016. All 6.3 million articles in it are used in our experiments. We use Wikipedia because it is relatively up-to-date, and much of the information in Freebase is derived from Wikipedia. 3 Method of Generating Training Data Figure 4 describes the architecture of automatically labeling data, which primarily involves the following four components: (i) Key argument de3http://framenet.icsi.berkeley.edu 4https://www.wikipedia.org/ 411 Figure 4: The architecture of automatically labeling training data for large scale event extraction. tection, which prioritizes arguments of each event type and selects key arguments for each type of event; (ii) Trigger word detection, which uses key arguments to label sentences that may express events preliminarily, and then detect triggers; (iii) Trigger word filtering and expansion, which uses FrameNet to filter noisy triggers and expand triggers; (iv) Automatically labeled data generation, which uses a SDS to label events in sentences. 3.1 Key Argument Detection This section illustrates how to detect key arguments for each event type via Freebase. Intuitively, arguments of a type of event play different roles. Some arguments play indispensable roles in an event, and serve as vital clues when distinguishing different events. For example, compared with arguments like time, location and so on, spouses are key arguments in a marriage event. We call these arguments as key arguments. 
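The next subsection formalizes this intuition as the Key Rate (Equations 1-3 below); as a preview, the toy computation here shows why a spouse-like argument would outscore a location-like one. Apart from the people.marriage instance count taken from Table 1, all counts are hypothetical, and the helper functions are ours rather than the authors' implementation.

# Preview of the key-argument intuition: salient within its event type and
# discriminative across event types. All counts below are hypothetical.
import math

def role_saliency(n_instances_with_arg, n_instances_of_type):
    return n_instances_with_arg / n_instances_of_type

def event_relevance(n_event_types_total, n_event_types_with_arg):
    return math.log(n_event_types_total / (1 + n_event_types_with_arg))

def key_rate(n_with_arg, n_of_type, n_types_total, n_types_with_arg):
    return (role_saliency(n_with_arg, n_of_type)
            * event_relevance(n_types_total, n_types_with_arg))

# Toy numbers for people.marriage: "spouse" occurs in nearly every instance and in
# few other event types, whereas "location of ceremony" is sparse and less specific.
total_types = 21
print(key_rate(150_000, 152_276, total_types, 3))   # spouse-like argument
print(key_rate(40_000, 152_276, total_types, 12))   # location-like argument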
We propose to use Key Rate (KR) to estimate the importance of an argument to a type of event, which is decided by two factors: Role Saliency and Event Relevance.

Role Saliency (RS) reflects the saliency of an argument in representing a specific event instance of a given event type. If we tend to use an argument to distinguish one event instance from other instances of a given event type, this argument plays a salient role in that event type. We define RS as follows:

RS_ij = Count(A_i, ET_j) / Count(ET_j)    (1)

where RS_ij is the role saliency of the i-th argument for the j-th event type, Count(A_i, ET_j) is the number of occurrences of Argument_i in all instances of eventType_j in Freebase, and Count(ET_j) is the number of instances of eventType_j in Freebase.

Event Relevance (ER) reflects the ability of an argument to discriminate between different event types. If an argument occurs in every event type, the argument has a low event relevance. We compute ER as follows:

ER_i = log( Sum(ET) / (1 + Count(ETC_i)) )    (2)

where ER_i is the event relevance of the i-th argument, Sum(ET) is the number of all event types in the knowledge base, and Count(ETC_i) is the number of event types containing the i-th argument.

Finally, KR is computed as follows:

KR_ij = RS_ij * ER_i    (3)

We compute KR for all arguments of each event type and sort the arguments according to KR. Then we choose the top K arguments as key arguments.

3.2 Trigger Word Detection

After detecting key arguments for every event type, we use these key arguments to label sentences that may express events in Wikipedia. At first, we use the Stanford CoreNLP tool (http://stanfordnlp.github.io/CoreNLP/) to convert the raw Wikipedia texts into sequences of sentences and to attach NLP annotations (POS tags, NER tags). We then select sentences that contain all key arguments of an event instance in Freebase as sentences expressing the corresponding events, and use these labeled sentences to detect triggers.

In a sentence, a verb tends to express an occurrence of an event. For example, in the ACE 2005 English data, 60% of events are triggered by verbs. As shown in Figure 1, threw is a trigger of an Attack event. Intuitively, if a verb occurs more often than other verbs in the labeled sentences of one event type, the verb tends to trigger this type of event; and if a verb occurs in sentences of every event type, like is, the verb has a low probability of triggering events. Thus we propose Trigger Candidate Frequency (TCF) and Trigger Event Type Frequency (TETF) to evaluate these two aspects. Finally, we employ Trigger Rate (TR), the product of TCF and TETF, to estimate the probability of a verb being a trigger, formulated as follows:

TR_ij = TCF_ij * TETF_i    (4)

TCF_ij = Count(V_i, ETS_j) / Count(ETS_j)    (5)
Because the number of nouns in one sentence is usually larger than that of verbs, it is hard to use TR to find nominal triggers. Thus, we propose to use linguistic resource FrameNet to filter noisy verbal triggers and expand nominal triggers. As the success of word embedding in capturing semantics of words (Turian et al., 2010), we employ word embedding to map the events in Freebase to frames in FrameNet. Specifically, we use the average word embedding of all words in i-th Freebase event type name ei and word embedding of k-th lexical units of j-th frame ej,k to compute the semantic similarity. Finally, we select the frame contains max similarity of ei and ej,k as the mapped frame, which can be formulated as follows: frame(i) = arg max j (similarity(ei, ej,k)) (7) Then, we filter the verb, which is in initial verbal trigger word lexicon and not in the mapping frame. And we use nouns with high confidence in the mapped frame to expand trigger lexicon. 3.4 Automatically labeled data generation Finally, we propose a Soft Distant Supervision and use it to automatically generate training data, which assumes that any sentence containing all key arguments in Freebase and a corresponding trigger word is likely to express that event in some way, and arguments occurring in that sentence are likely to play the corresponding roles in that event. 4 Method of Event Extraction In this paper, event extraction is formulated as a two-stage, multi-class classification task. The first stage is called Event Classification, which aims to predict whether the key argument candidates participate in a Freebase event. If the key arguments participate a Freebase event, the second stage is conducted, which aims to assign arguments to the event and identify their corresponding roles. We call this stage as argument classification. We employ two similar Dynamic Multi-pooling Convolutional Neural Networks with Multi-instance Learning (DMCNNs-MIL) for above two stages. The Dynamic Multi-pooling Convolutional Neural Networks (DMCNNs) is the best reported CNN-based model for event extraction (Chen et al., 2015) by using human-annotated training data. However, our automatically labeled data face a noise problem, which is a intrinsic problem of using DS to construct training data (Hoffmann et al., 2011; Surdeanu et al., 2012). In order to alleviate the wrong label problem, we use Multi-instance Learning (MIL) for two DMCNNs. Because the second stage is more complicated and limited in space, we take the MIL used in arguments classification as an example and describes as follows: We define all of the parameters for the stage of argument classification to be trained in DMCNNs as θ. Suppose that there are T bags {M1, M2, ..., MT } and that the i-th bag contains qi instances (sentences) Mi =  m1 i , m2 i , ..., mqi i , the objective of multi-instance learning is to predict the labels of the unseen bags. In stage of argument classification, we take sentences containing the same argument candidate and triggers with a same event type as a bag and all instances in a bag are considered independently. Given an input instance mj i, the network with the parameter θ outputs a vector O, where the r-th component Or corresponds to the score associated with argument role r. To obtain the conditional probability p(r|mj i, θ), we apply a softmax operation over all argument role types: p(r|mj i, θ) = eor nP k=1 eok (8) where, n is the number of roles. And the objective of multi-instance learning is to discriminate bags rather than instances. 
Thus, we define the objective function on the bags. Given all (T) training bags (Mi, yi), we can define the objective function 413 Event Type Freebase Size Sentences (KA) Sentences (KA+T) Examples of argument roles sorted by KR Examples of triggers people.marriage 152,276 56,837 26,349 spouse, spouse, from, to, location marriage, marry, wed, wedding, couple,..., wife music.group membership 239,813 90,617 20,742 group, member, start, role, end musician, singer, sing, sang, sung, concert,..., play education.education 530,538 26,966 11,849 student, institution, degree,..., minor educate, education, graduate, learn, study,..., student organization.leadership 43,610 5,429 3,416 organization, person, title,..., to CEO, charge, administer, govern, rule, boss,..., chair olympics.olympic medal honor 20,790 4,056 2,605 medalist, olympics, event,..., country win, winner, tie, victor, gold, silver,..., bronze ... ... ... ... ... ... sum of 21 selected events 3,870,492 421,602 72,611 argument1, argument2 ,..., argumentN trigger1, trigger2, trigger3, ... , triggerN Table 2: The statistics of five largest automatically labeled events in selected 21 Freebase events, with their size of instances in Freebase, sentences labeled with key argument (KA) and KA + Triggers(T), examples of arguments roles sorted by KR and examples of triggers. using cross-entropy at the bag level as follows: J (θ) = T X i=1 log p(yi|mj i, θ) (9) where j is constrained as follows: j∗= arg max j p(r|mj i, θ) 1 ≤j ≤qi (10) To compute the network parameter θ, we maximize the log likelihood J (θ) through stochastic gradient descent over mini-batches with the Adadelta (Zeiler, 2012) update rule. 5 Experiments In this section, we first manually evaluate our automatically labeled data. Then, we conduct automatic evaluations for our labeled data based on ACE corpus and analyze effects of different approaches to automatically label training data. Finally, we shows the performance of DMCNNs-MIL on our automatically labeled data. 5.1 Our Automatically Labeled Data By using the proposed methods, a large set of labeled data could be generated automatically. Table 2 shows the statistics of the five largest automatically labeled events among selected 21 Freebase events. Two hyper parameters, the number of key arguments and the value of TR in our automatically data labeling, are set as 2 and 0.8, by grid search respectively. When we merely use two key arguments to label data, we will obtain 421, 602 labeled sentences. However, these sentences miss labeling triggers. Thus, we leverage these rough labeled data and FrameNet to find triggers and use SDS to generate labeled data. Finally, 72, 611 labeled sentences are generated automatically. Compared with nearly 6, 000 human annotated labeled sentence in ACE, our method can automatically generate large scale labeled training data. 5.2 Manual Evaluations of Labeled Data ##001 He is the uncle of [Amal Clooney], [wife] of the actor [George Clooney]. Trigger: wife Event Type: Marriage MannalAnotate[Y/N]: Argument: Amal Clooney Role:Spouse MannalAnotate[Y/N]: Argument: George Clooney Role:Spouse MannalAnotate[Y/N]: ##002 She was [married] to the cinematographer [Theo Nischwitz] and was sometimes credited as [Gertrud Hinz-Nischwitz]. Trigger: married Event Type: Marriage MannalAnotate[Y/N]: Argument: Theo Nischwitz Role:Spouse MannalAnotate[Y/N]: Argument: Gertrud Hinz-Nischwitz Role:Spouse MannalAnotate[Y/N]: Figure 5: Examples of manual evaluations. 
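Labeled samples like those shown in Figure 5 are produced by the Soft Distant Supervision rule of Section 3.4. The following is a rough sketch of that sentence-level check; the helper names, the simplified role keys, and the plain substring matching are ours (the actual pipeline relies on CoreNLP preprocessing), so this is an approximation rather than the authors' implementation.

# Rough sketch of the SDS labeling check: a sentence is labeled with an event if it
# contains all key arguments of a Freebase instance plus a trigger from the lexicon.
def sds_label(sentence, event_instance, key_roles, trigger_lexicon):
    key_args = [event_instance["roles"][r] for r in key_roles]
    has_key_args = all(arg in sentence for arg in key_args)
    trigger = next((t for t in trigger_lexicon if t in sentence.lower()), None)
    if has_key_args and trigger is not None:
        # Label the trigger, the event type, and every argument found in the sentence.
        found = {role: val for role, val in event_instance["roles"].items() if val in sentence}
        return {"trigger": trigger, "event_type": event_instance["type"], "arguments": found}
    return None

instance = {"type": "people.marriage",
            "roles": {"spouse_a": "Theo Nischwitz", "spouse_b": "Gertrud Hinz-Nischwitz"}}
sent = ("She was married to the cinematographer Theo Nischwitz and was sometimes "
        "credited as Gertrud Hinz-Nischwitz.")
print(sds_label(sent, instance, ["spouse_a", "spouse_b"], {"married", "marriage", "wedding"}))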
We firstly manually evaluate the precision of our automatically generated labeled data. We randomly select 500 samples from our automatically labeled data. Each selected sample is a sentence with a highlighted trigger, labeled arguments and corresponding event type and argument roles. Figure 5 gives some samples. Annotators are asked to assign one of two labels to each sample. “Y”: the word highlighted in the given sentence indeed triggers an event of the corresponding type or the word indeed plays the corresponding role in that event. Otherwise “N” is labeled. It is very easy to annotate a sample for annotators, thus the annotated results are expected to be of high quality. Each sample is independently annotated by three annotators6 (including one of the authors and two of our colleagues who are familiar with event extraction task) and the final decision is made by voting. Stage Average Precision Trigger Labeling 88.9 Argument Labeling 85.4 Table 3: Manual Evaluation Results We repeat above evaluation process on the final 72, 611 labeled data three times and the average precision is shown in Table 3. Our automatically generated data can achieve a precision of 88.9 and 85.4 for trigger labeling and argument labeling re6The inter-agreement rate is 87.5% 414 Methods Trigger Identification(%) Trigger Identification + Classification(%) Argument Identification(%) Argument Role(%) P R F P R F P R F P R F Li’s structure trained with ACE 76.9 65.0 70.4 73.7 62.3 67.5 69.8 47.9 56.8 64.7 44.4 52.7 Chen’s DMCNN trained with ACE 80.4 67.7 73.5 75.6 63.6 69.1 68.8 51.9 59.1 62.2 46.9 53.5 Nguyen’s JRNN trained with ACE 68.5 75.7 71.9 66.0 73.0 69.3 61.4 64.2 62.8 54.2 56.7 55.4 DMCNN trained with ED Only 77.6 67.7 72.3 72.9 63.7 68.0 64.9 51.7 57.6 58.7 46.7 52.0 DMCNN trained with ACE+ED 79.7 69.6 74.3 75.7 66.0 70.5 71.4 56.9 63.3 62.8 50.1 55.7 Table 4: Overall performance on ACE blind test data spectively, which demonstrates that our automatically labeled data is of high quality. 5.3 Automatic Evaluations of Labeled Data To prove the effectiveness of the proposed approach automatically, we add automatically generated labeled data into ACE dataset to expand the training sets and see whether the performance of the event extractor trained on such expanded training sets is improved. In our automatically labeled data, there are some event types that can correspond to those in ACE dataset. For example, our people.marriage events can be mapped to life.marry events in ACE2005 dataset. We mapped these types of events manually and we add them into ACE training corpus in two ways. (1) we delete the human annotated ACE data for these mapped event types in ACE dataset and add our automatically labeled data to remainder ACE training data. We call this Expanded Data (ED) as ED Only. (2) We directly add our automatically labeled data of mapped event types to ACE training data and we call this training data as ACE+ED. Then we use such data to train the same event extraction model (DMCNN) and evaluate them on the ACE testing data set. Following (Nguyen et al., 2016; Chen et al., 2015; Li et al., 2013), we used the same test set with 40 newswire articles and the same development set with 30 documents and the rest 529 documents are used for ACE training set. And we use the same evaluation metric P, R, F as ACE task defined. We select three baselines trained with ACE data. (1) Li’s structure, which is the best reported structured-based system (Li et al., 2013). 
(2) Chen’s DMCNN, which is the best reported CNN-based system (Chen et al., 2015). (3) Nguyen’s JRNN, which is the state-ofthe-arts system (Nguyen et al., 2016). The results are shown in Table 4. Compared with all models, DMCNN trained with ACE+ED achieves the highest performance. This demonstrates that our automatically generated labeled data could expand human annotated training data effectively. Moreover, compared with Chen’s DMCNN trained with ACE, DMCNN trained with ED Only achieves a competitive performance. This demonstrates that our large scale automatically labeled data is competitive with elaborately humanannotated data. 5.4 Discussion Impact of Key Rate In this section, we prove the effectiveness of KR to find key arguments and explore the impact of different numbers of key arguments to automatically generate data. We specifically select two methods as baselines for comparison with our KR method: ER and RS, which use the event relevance and role salience to sort arguments of each type of events respectively. Then we choose the same number of key arguments in all methods and use these key arguments to label data. After that we evaluate these methods by using above automatic evaluations based on ACE data. Results are shown in Table 5. ACE+KR achieve the best performance in both stages. This demonstrates the effectiveness of our KR methods. Feature Trigger Argument F1 F1 ACE 69.1 53.5 ACE + RS 70.1 55.3 ACE + ER 69.5 54.2 ACE + KR 70.5 55.7 Table 5: Effects of ER, RS and KR To explore the impact of different numbers of key arguments, we sort all arguments of each type of events according to KR value and select top k arguments as the key arguments. Examples are shown in Table 2. Then we automatically evaluate the performance by using automatic evaluations proposed above. Figure 6 shows the results, when we set k = 2, the method achieves a best 415 Figure 6: Effects of the number of key arguments performance in both stages. Then, the F1 value reduces as k grows. The reason is that the heuristics for data labeling are stricter as k grows. As a result, less training data is generated. For example, if k = 2, we will get 25, 797 sentences labeled as people.marriage events and we will get 534 labeled sentences, if k = 3. However, when we set k = 1, although more labeled data are generated, the precision could not be guaranteed. Impact of Trigger Rate and FrameNet In this section, we prove the effectiveness of TR and FrameNet to find triggers. We specifically select two methods as baselines: TCF and TETF. TCF, TETF and TR respectively use the trigger candidate frequency, trigger event type frequency and trigger rate to sort trigger candidates of each type of events. Then we generate initial trigger lexicon by using all trigger candidates with high TCF value, TETF value or TR value. We set these hyper parameters as 0.8, 0.9 and 0.8, respectively, which are determined by grid search from (0.5, 0.6, 0.7, 0.8, 0.9, 1.0). FrameNet was used to filter noisy verbal triggers and expand nominal triggers. Trigger examples generated by TR+Framenet are shown in Table 2. Then we evaluate the performance of these methods by using above automatic evaluations. Results are shown in Table 6, Compared with ACE+TCF and ACE+TETF, ACE+TR gains a higher performance in both stages. It demonstrates the effectiveness of our TR methods. When we use FrameNet to generate triggers, compared with ACE+TR, we get a 1.0 improvement on trigger classification and a 1.7 improvement on argument classification. 
Such improvements are higher than improvements gained by other methods (TCF, IEF, TR), which demonstrates the effectiveness of the usage of FrameNet. Feature Trigger Argument F1 F1 ACE 69.1 53.5 ACE + TCF 69.3 53.8 ACE + TETF 69.2 53.7 ACE + TR 69.5 54.0 ACE + TR + FrameNet 70.5 55.7 Table 6: Effects of TCF, TETF,TR and FrameNet 5.5 Performance of DMCNN-MIL Following previous work (Mintz et al., 2009) in distant supervised RE, we evaluate our method in two ways: held-out and manual evaluation. Held-out Evaluation In the held-out evaluation, we hold out part of the Freebase event data during training, and compare newly discovered event instances against this heldout data. We use the following criteria to judge the correctness of each predicted event automatically: (1) An event is correct if its key arguments and event type match those of an event instance in Freebase; (2) An argument is correctly classified if its event type and argument role match those of any of the argument instance in the corresponding Freebase event. Figure 7 and Figure 8 show the precision-recall (P-R) curves for each method in the two stages of event extraction respectively. We can see that multi-instance learning is effective to alleviate the noise problem in our distant supervised event extraction. Figure 7: P-R curves for event classification. Figure 8: P-R curves for argument classification. Human Evaluation Because the incomplete nature of Freebase, heldout evaluation suffers from false negatives problem. We also perform a manual evaluation to eliminate these problems. In the manual evaluation, we manually check the newly discovered event instances that are not in Freebase. Because the number of these event instances in the test data is unknown, we cannot calculate the recall in this case. 416 Instead, we calculate the precision of the top n extracted event instances. The human evaluation results are presented in Table 7. We can see that DMCNNs-MIL achieves the best performance. Methods Event Classificaiton Top 100 Top 300 Top 500 Average DMCNNs 58.7 54.3 52.9 55.3 DMCNNs+MIL 70.6 67.2 64.3 67.4 Methods Argument Classificaiton Top 100 Top 300 Top 500 Average DMCNNs 43.5 40.6 36.7 40.3 DMCNNs+MIL 50.8 45.6 43.5 46.6 Table 7: Precision for top 100, 300, and 500 events 6 Related Work Most of previous event extraction work focused on supervised learning paradigm and trained event extractors on human-annotated data which yield relatively high performance. (Ahn, 2006; Ji and Grishman, 2008; Hong et al., 2011; McClosky et al., 2011; Li et al., 2013, 2014; Chen et al., 2015; Nguyen and Grishman, 2015; Nguyen et al., 2016). However, these supervised methods depend on the quality of the training data and labeled training data is expensive to produce. Unsupervised methods can extract large numbers of events without using labeled data (Chambers and Jurafsky, 2011; Cheung et al., 2013; Huang et al., 2016). But extracted events may not be easy to be mapped to events for a particular knowledge base. Distant supervision have been used in relation extraction for automatically labeling training data (Mintz et al., 2009; Hinton et al., 2012; Krause et al., 2012; Krishnamurthy and Mitchell, 2012; Berant et al., 2013; Surdeanu et al., 2012; Zeng et al., 2015). But DS for RE cannot directly use for EE. For the reasons that an event is more complicated than a relation and the task of EE is more difficult than RE. 
The best reported supervised RE and EE system got a F1-score of 88.0% (Wang et al., 2016) and 55.4% (Nguyen et al., 2016) respectively. Reschke et al. (2014) extended the distant supervision approach to fill slots in plane crash. However, the method can only extract arguments of one plane crash type and need flight number strings as input. In other words, the approach cannot extract whole event with different types automatically. 7 Conclusion and Future Work In this paper, we present an approach to automatically label training data for EE. The experimental results show the quality of our large scale automatically labeled data is competitive with elaborately human-annotated data. Also, we provide a DMCNN-MIL model for this data as a baseline for further research. In the future, we will use the proposed automatically data labeling method to more event types and explore more models to extract events by using automatically labeled data. Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61533018) and the National Basic Research Program of China (No. 2014CB340503). And this research work was also supported by Google through focused research awards program. References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning About Time and Events. pages 1–8. http://dl.acm.org/citation.cfm?id=1629235.1629236. Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics. Association for Computational Linguistics, pages 86–90. http://aclweb.org/anthology/C98-1013. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1533–1544. http://aclweb.org/anthology/D13-1160. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. pages 1247–1250. http://doi.acm.org/10.1145/1376616.1376746. Nathanael Chambers and Dan Jurafsky. 2011. Template-based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 976–986. http://aclweb.org/anthology/P11-1098. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meet417 ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, pages 167–176. https://doi.org/10.3115/v1/P15-1017. Kit Jackie Chi Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 837–846. http://aclweb.org/anthology/N13-1104. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. 
Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580 https://arxiv.org/pdf/1207.0580. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and S. Daniel Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 541–550. http://aclweb.org/anthology/P111055. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1127–1136. http://aclweb.org/anthology/P11-1113. Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 258–268. http://www.aclweb.org/anthology/P16-1025. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 254–262. http://aclweb.org/anthology/P08-1030. Sebastian Krause, Hong Li, Hans Uszkoreit, and Feiyu Xu. 2012. Large-scale learning of relationextraction rules with distant supervision from the web. In Proceedings of International Semantic Web Conference, Springer, pages 263– 278. http://link.springer.com/chapter/10.1007/9783-642-35176-1 17. Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 754–765. http://aclweb.org/anthology/D12-1069. Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1846–1851. https://doi.org/10.3115/v1/D141198. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 73– 82. http://aclweb.org/anthology/P13-1008. Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 2134–2143. http://www.aclweb.org/anthology/P16-1201. David McClosky, Mihai Surdeanu, and Christopher Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1626–1635. http://aclweb.org/anthology/P11-1163. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, pages 1003– 1011. http://aclweb.org/anthology/P09-1113. Huu Thien Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, pages 365–371. https://doi.org/10.3115/v1/P15-2060. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 300–309. http://www.aclweb.org/anthology/N16-1034. Kevin Reschke, Martin Jankowiak, Mihai Surdeanu, Christopher D Manning, and Daniel Jurafsky. 418 2014. Event extraction using distant supervision. In Proceedings of the Ninth International Conference on Language Resources and Evaluation. pages 4527–4531. http://www.lrecconf.org/proceedings/lrec2014/pdf/1127 Paper.pdf. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and D. Christopher Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 455–465. http://aclweb.org/anthology/D12-1042. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 384–394. http://aclweb.org/anthology/P10-1040. Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1298–1307. http://www.aclweb.org/anthology/P16-1123. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701 https://arxiv.org/pdf/1212.5701. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1753–1762. https://doi.org/10.18653/v1/D15-1203. 419
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 420–429 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1039 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 420–429 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1039 Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules Xiaoshi Zhong, Aixin Sun, and Erik Cambria School of Computer Science and Engineering Nanyang Technological University, Singapore {xszhong,axsun,cambria}@ntu.edu.sg Abstract Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods. 1 Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014). Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007, 2010; UzZaman et al., 2013). 1Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b), Gigaword (Parker et al., 2011), WikiWars (Mazur and Dale, 2010), and Tweets. From the analysis we make four findings about time expressions. First, most time expressions are very short, with 80% of time expressions containing no more than three tokens. Second, at least 91.8% of time expressions contain at least one time token. Third, the vocabulary used to express time information is very small, with a small group of keywords. Finally, words in time expressions demonstrate similar syntactic behaviour. All the findings relate to the principle of least effort (Zipf, 1949). That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949). Time expression is part of language and acts as an interface of communication. Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate. According to the findings we propose a typebased approach named SynTime (‘Syn’ stands for syntactic) to recognize time expressions. 
Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions. Time tokens are the words that explicitly express time information, such as time units (e.g., ‘year’). Modifiers modify time tokens; they appear before or after time tokens, e.g., ‘several’ and ‘ago’ in ‘several years ago.’ Numerals are ordinals and numbers. From free text SynTime first identifies time tokens, then recognizes modifiers and numerals. Naturally, SynTime is a rule-based tagger. The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules. The definition of token type in SynTime is inspired by part420 of-speech in which “linguists group some words of language into classes (sets) which show similar syntactic behaviour.” (Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour. Other rulebased taggers define types for tokens based on their semantic meaning. For example, SUTime defines 5 semantic modifier types, such as frequency modifiers;2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens. (See Section 4.1 for details.) Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves. SynTime instead designs general rules on the token types rather than on the tokens themselves. For example, our general rules do not work on tokens ‘February’ nor ‘1989’ but on their token types ‘MONTH’ and ‘YEAR.’ That is why we call SynTime a type-based approach. More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position. In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion. The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time. The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens. In this paper, we test SynTime on specific domains and specific text types in English. (The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.) Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.3 TimeBank and Tweets are comprehensive datasets while WikiWars is a specific domain dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text. Experiments show that SynTime achieves comparable results on WikiWars dataset, and significantly outperforms the three state-of-the-art baselines on TimeBank and Tweets 2https://github.com/stanfordnlp/CoreNLP/tree/ master/src/edu/stanford/nlp/time/rules 3Gigaword dataset is not used in our experiments because the labels in the dataset are not the ground truth labels but instead are automatically generated by other taggers. datasets. More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset. To sum up, we make the following contributions. • We analyze time expressions from four datasets and make four findings. 
The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949). • We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules. SynTime is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages. • We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines. 2 Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007, 2010; UzZaman et al., 2013). The task is divided into two subtasks: recognition and normalization. Rule-based Time Expression Recognition. Rule-based time taggers like GUTime, HeidelTime, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Str¨otgen and Gertz, 2010; Chang and Manning, 2012). HeidelTime (Str¨otgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression. SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang and Manning, 2014). It first identifies individual words, then expands them to chunks, and finally to time expressions. Rule-based taggers achieve very good results in TempEval exercises. SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens. Moreover, SynTime designs rules in a heuristic way. Machine Learning based Method. Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions. Example features 421 include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013). The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; UzZaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013). Some models obtain good performance, and even achieve the highest F1 of 82.71% on strict match in TempEval-3 (Bethard, 2013). Outside TempEval exercises, Angeli et al. leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012). In the method named UWTime, Lee et al. handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014). The two methods explicitly use linguistic information. In (Lee et al., 2014), especially, CCG could capture rich structure information of language, similar to the rule-based methods. Tabassum et al. focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016). They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens. However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time. Time Expression Normalization. 
Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Str¨otgen and Gertz, 2010; Llorens et al., 2010; UzZaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013). Because the rule systems have high similarity, Llorens et al. suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012). Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016). Lee et al. (Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al. (Tabassum et al., 2016) use a loglinear algorithm to normalize time expressions. SynTime focuses only on the recognition task. The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012). Table 1: Statistics of the datasets (A tweet here is a document.) Dataset #Docs #Words #TIMEX TimeBank 183 61,418 1,243 Gigaword 2,452 666,309 12,739 WikiWars 22 119,468 2,671 Tweets 942 18,199 1,127 10 20 30 40 50 60 70 80 90 100 1 2 3 4 5 6 7 8 Cumulative percentage Number of words in time expressions TimeBank Gigaword Wikiwars Tweets Figure 1: Length distribution of time expressions 3 Time Expression Analysis 3.1 Dataset We conduct an analysis on four datasets: TimeBank, Gigaword, WikiWars, and Tweets. TimeBank (Pustejovsky et al., 2003b) is a benchmark dataset in TempEval series (Verhagen et al., 2007, 2010; UzZaman et al., 2013), consisting of 183 news articles. Gigaword (Parker et al., 2011) is a large automatically labelled dataset with 2,452 news articles and used in TempEval-3. WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010). Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression. Table 1 summarizes the datasets. 3.2 Finding From the four datasets, we analyze their time expressions and make four findings. We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics. Finding 1 Time expressions are very short. More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words. Figure 1 plots the length distribution of time expressions. Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length 422 Table 2: The percentage of time expressions that contain at least one time token, and the average length of time expressions Dataset Percent Average Length TimeBank 94.61 2.00 Gigaword 96.44 1.70 WikiWars 91.81 2.38 Tweets 96.01 1.51 Table 3: Number of distinct words and number of distinct time tokens in time expressions Dataset #Words #Time Tokens TimeBank 130 64 Gigaword 214 80 WikiWars 224 74 Tweets 107 64 of time expressions follow a similar distribution. In particular, the one-word time expressions range from 36.23% in WikiWars to 62.91% in Tweets. In informal communication people tend to use words in minimum length to express time information. The third column in Table 2 reports the average length of time expressions. On average, time expressions contain about two words. Finding 2 More than 91% of time expressions contain at least one time token. The second column in Table 2 reports the percentage of time expressions that contain at least one time token. 
We find that at least 91.81% of time expressions contain time token(s). (Some time expressions have no time token but depend on other time expressions; in ‘2 to 8 days,’ for example, ‘2’ depends on ‘8 days.’) This suggests that time tokens account for time expressions. Therefore, to recognize time expressions, it is essential to recognize their time tokens. Finding 3 Only a small group of time-related keywords are used to express time information. From the time expressions in all four datasets, we find that the group of keywords used to express time information is small. Table 3 reports the number of distinct words and of distinct time tokens. The words/tokens are manually normalized before counting and their variants are ignored. For example, ‘year’ and ‘5yrs’ are counted as one token ‘year.’ Numerals in the counting are ignored. Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable. Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282. Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets. This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets. In other words, time expressions highly overlap at their time tokens. Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents. For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text. Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD. This indicates that POS could not provide enough information to distinguish time expressions from common words. However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT. Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD. This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way. When seeing this, we realize that this is exactly how linguists define part-of-speech for language.4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language. The four findings all relate to the principle of least effort (Zipf, 1949). That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949). Time expression is part of language and acts as an interface of communication. Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate. To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small. To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral. 4“linguists group some words of language into classes (sets) which show similar syntactic behaviour.” (Manning and Schutze, 1999) 423 General Heuristic Rules 1989, February, 12:55, this year, 3 months ago, ... Time Token, Modifier, Numeral Rule level Type level Token level Figure 2: Layout of SynTime. 
The layout consists of three levels: token level, type level, and rule level. Token types group the constituent tokens of time expressions. Heuristic rules work on token types, and are independent of specific tokens. 4 SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types. Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level. Token types at the type level group the tokens of time expressions. Heuristic rules lie at the rule level, working on token types rather than on tokens themselves. That is why the heuristic rules are general. For example, the heuristic rules do not work on tokens ‘1989’ nor ‘February,’ but on their token types ‘YEAR’ and ‘MONTH.’ The heuristic rules are only relevant to token types, and are independent of specific tokens. For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens. In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English. The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types. Figure 3 shows the overview of SynTime in practice. Shown on the left-hand side, SynTime is initialized with regular expressions over tokens. After initialization, SynTime can be directly applied on text. On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type. The expansion enables SynTime to recognize time expressions in text from different domains and different text types. Shown on the right-hand side of Figure 3, SynTime recognizes time expression through three main steps. In the first step, SynTime identifies Figure 3: Overview of SynTime. Left-hand side shows the construction of SynTime, with initialization using token regular expressions, and optional expansion using training text. Right-hand side shows the main steps of SynTime recognizing time expressions. time tokens from the POS-tagged raw text. Then around the time tokens SynTime searches for modifiers and numerals to form time segments. In the last step, SynTime transforms the time segments to time expressions. 4.1 SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral. Token types to tokens is like POS tags to words; for example, ‘February’ has a POS tag of NNP and a token type of MONTH. Time Token. We define 15 token types for the time tokens and use their names similar to Joda-Time classes:5 DECADE (-), YEAR (-), SEASON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), TIME ZONE (6), and ERA (2). Number in ‘()’ indicates the number of distinct tokens in this token type. ‘-’ indicates that this token type involves changing digits and cannot be counted. Modifier. We define 3 token types for the modifiers according to their possible positions relative to time tokens. Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2). LINKAGE (4) link two time 5http://www.joda.org/joda-time/ 424 tokens. 
Besides, we define 2 special modifier types, COMMA (1) for comma ‘,’ and IN ARTICLE (2) for indefinite articles ‘a’ and ‘an.’ TimeML (Pustejovsky et al., 2003a) and TimeBank (Pustejovsky et al., 2003b) do not treat most prepositions like ‘on’ as a part of time expressions. Thus SynTime does not collect those prepositions. Numeral. Number in time expressions can be a time token e.g., ‘10’ in ‘October 10, 2016,’ or a modifier e.g., ‘10’ in ‘10 days.’ We define NUMERAL (-) for the ordinals and numbers. SynTime Initialization. The token regular expressions for initializing SynTime are collected from SUTime,6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval3 (Chang and Manning, 2012, 2013). Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions. 4.2 Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions. The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction. 4.2.1 Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions. Some words might cause ambiguity. For example, ‘May’ could be a modal verb, or the fifth month of year. To filter out the ambiguous words, we use POS information. In implementation, we use Stanford POS Tagger;7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2. Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions. In the next two steps, SynTime works on those token types. 4.2.2 Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment. The searching is 6https://github.com/stanfordnlp/CoreNLP/tree/ master/src/edu/stanford/nlp/time/rules 7http://nlp.stanford.edu/software/tagger.shtml PREFIX/the PREFIX/last TIME_UNIT/week … said WEEK/Friday s1 s2 e1 s1 (a) Stand-alone time segment to time expression s1 s2 s1 PREFIX/the NUMERAL/third TIME_UNIT/quarter PREFIX/of YEAR/1984 (b) Merge adjacent time segments s1 s2 s1 MONTH/January NUMERAL/13 YEAR/1951 (c) Merge overlapping time segments s1 s2 s1 MONTH/June NUMERAL/30 COMMA/, YEAR/1990 (d) Merge overlapping time segments s1 s2 e1 s1 NUMERAL/8 LINKAGE/to NUMERAL/20 TIME_UNIT/days (e) Dependent time segment and time segment Figure 4: Example time segments and time expressions. The above labels are from time segment identification; the below labels are for time expression extraction. under simple heuristic rules in which the key idea is to expand the time token’s boundaries. At first, each time token is a time segment. If it is either a PERIOD or DURATION, then no need to further search. Otherwise, search its left and its right for modifiers and numerals. For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching. For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching. Both the left and the right searching stop when reaching a COMMA or LINKAGE or a non-modifier/numeral word. 
The left searching does not exceed the previous time token; the right searching does not exceed the next time token. A time segment consists of exactly one time token, and zero or some modifiers/numerals. A special kind of time segments do not contain any time token; they depend on other time segments next to them. For example, in ‘8 to 20 days,’ ‘to 20 days’ is a time segment, and ‘8 to’ forms a dependent time segment. (See Figure 4(e).) 4.2.3 Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment. 425 We scan the time segments in a sentence from beginning to the end. A stand-alone time segment is a time expression. (See Figure 4(a).) The focus is to deal with two or more time segments that are adjacent or overlapping. If two time segments s1 and s2 are adjacent, merge them to form a new time segment s1. (See Figure 4(b).) Consider that s1 and s2 overlap at a shared boundary. According to our time segment identification, the shared boundary could be a modifier or a numeral. If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s1 and s2. (See Figure 4(c).) If the word is a LINKAGE, then extract s1 as a time expression and continue scanning. When the shared boundary is a COMMA, merge s1 and s2 only if the COMMA’s previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same. (See Figure 4(d).) Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types. After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types. 4.3 SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule. The expansion requires the words to be added to be annotated manually. We apply the initial SynTime on the time expressions from training text and list the words that are not covered. Whether the uncovered words are added to SynTime is manually determined. The rule for determination is that the added words can not cause ambiguity and should be generic. WikiWars dataset contains a few examples like this: ‘The time Arnold reached Quebec City.’ Words in this example are extremely descriptive, and we do not collect them. In tweets, on the other hand, people may use abbreviations and informal variants; for example, ‘2day’ and ‘tday’ are popular spellings of ‘today.’ Such kind of abbreviations and informal variants will be collected. According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much. In addition, we find that even in tweets people tend to use formal words. In the Twitter word clusters trained from 56 million English tweets,8 the most often used words are the formal words, and their frequencies are much greater than the informal words’. The cluster of ‘today,’9 for example, its most often use is the formal one, ‘today,’ which appears 1,220,829 times; while its second most often use ‘2day’ appears only 34,827 times. 
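To make the boundary-expansion heuristic of Sections 4.2.2 and 4.2.3 concrete, the following Python sketch identifies time segments around time tokens and merges adjacent or overlapping segments. It is illustrative only and assumes token types have already been assigned (Section 4.2.1); the regular expressions, POS filtering, dependent segments, and the full COMMA merging conditions of Figure 4(d) are omitted, so it should not be taken as SynTime's actual implementation.

```python
# Illustrative sketch (not the authors' code) of SynTime's boundary expansion
# and merging. Token types are assumed to be assigned already (Section 4.2.1).

TIME_TYPES = {"DECADE", "YEAR", "SEASON", "MONTH", "WEEK", "DATE", "TIME",
              "DAY_TIME", "TIMELINE", "HOLIDAY", "PERIOD", "DURATION",
              "TIME_UNIT", "TIME_ZONE", "ERA"}
LEFT_MODS = {"PREFIX", "NUMERAL", "IN_ARTICLE"}   # expand leftwards over these
RIGHT_MODS = {"SUFFIX", "NUMERAL"}                # expand rightwards over these


def identify_segments(types):
    """types: list of token types for one sentence. Returns (start, end) spans."""
    time_idx = [i for i, t in enumerate(types) if t in TIME_TYPES]
    segments = []
    for n, i in enumerate(time_idx):
        left, right = i, i
        if types[i] not in {"PERIOD", "DURATION"}:   # these need no expansion
            prev_tt = time_idx[n - 1] if n > 0 else -1
            next_tt = time_idx[n + 1] if n + 1 < len(time_idx) else len(types)
            # expand left, without crossing the previous time token
            while left - 1 > prev_tt and types[left - 1] in LEFT_MODS:
                left -= 1
            # expand right, without crossing the next time token
            while right + 1 < next_tt and types[right + 1] in RIGHT_MODS:
                right += 1
        segments.append((left, right))
    return segments


def merge_segments(segments, types):
    """Merge adjacent/overlapping segments into time expressions (simplified)."""
    expressions = []
    for seg in segments:
        if expressions and seg[0] <= expressions[-1][1] + 1:
            shared = expressions[-1][1]
            # adjacent segments always merge; overlapping segments merge unless
            # the shared boundary is a COMMA or LINKAGE (the paper's extra
            # COMMA conditions in Figure 4(d) are left out here)
            if seg[0] > shared or types[shared] not in {"COMMA", "LINKAGE"}:
                expressions[-1] = (expressions[-1][0], seg[1])
                continue
        expressions.append(seg)
    return expressions


# toy example mirroring Figure 4(a): "the last week , said Friday"
toy_types = ["PREFIX", "PREFIX", "TIME_UNIT", "COMMA", "OTHER", "WEEK"]
segs = identify_segments(toy_types)          # [(0, 2), (5, 5)]
print(merge_segments(segs, toy_types))       # [(0, 2), (5, 5)]
```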
The low rate of informal words (e.g., about 3% in ‘today’ cluster) suggests that even in informal environment the manual keyword addition costs little. 5 Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UWTime) on three datasets (i.e., TimeBank, WikiWars, and Tweets). WikiWars is a specific domain dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text. For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I. 5.1 Experiment Setting Datasets. We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter. For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them. 942 tweets of which each contains at least one time expression. From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions. We therefore roughly consider that SUTime misses about 3% time expressions in tweets. Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank. We finally get 1,127 manually labeled time expressions. For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training. Baseline Methods. We compare SynTime with methods: HeidelTime (Str¨otgen and Gertz, 2010), SUTime (Chang and Manning, 2012), and UW8http://www.cs.cmu.edu/˜ark/TweetNLP/cluster_ viewer.html 9http://www.cs.cmu.edu/˜ark/TweetNLP/paths/ 01111110010.html 426 Table 4: Overall performance. The best results are in bold face and the second best are underlined. Some results are borrowed from their original papers and the papers are indicated by the references. Dataset Method Strict Match Relaxed Match Pr. Re. F1 Pr. Re. F1 TimeBank HeidelTime(Strotgen et al., 2013) 83.85 78.99 81.34 93.08 87.68 90.30 SUTime(Chang and Manning, 2013) 78.72 80.43 79.57 89.36 91.30 90.32 UWTime(Lee et al., 2014) 86.10 80.40 83.10 94.60 88.40 91.40 SynTime-I 91.43 92.75 92.09 94.29 95.65 94.96 SynTime-E 91.49 93.48 92.47 93.62 95.65 94.62 WikiWars HeidelTime(Lee et al., 2014) 85.20 79.30 82.10 92.60 86.20 89.30 SUTime 78.61 76.69 76.64 95.74 89.57 92.55 UWTime(Lee et al., 2014) 87.70 78.80 83.00 97.60 87.60 92.30 SynTime-I 80.00 80.22 80.11 92.16 92.41 92.29 SynTime-E 79.18 83.47 81.27 90.49 95.39 92.88 Tweets HeidelTime 89.58 72.88 80.37 95.83 77.97 85.98 SUTime 76.03 77.97 76.99 88.43 90.68 89.54 UWTime 88.54 72.03 79.44 96.88 78.81 86.92 SynTime-I 89.52 94.07 91.74 93.55 98.31 95.87 SynTime-E 89.20 94.49 91.77 93.20 98.78 95.88 Time (Lee et al., 2014). HeidelTime and SUTime both are rule-based methods, and UWTime is a learning method. When training UWTime on Tweets, we try two settings: (1) train with only Tweets training set; (2) train with TimeBank and Tweets training set. The second setting achieves slightly better result and we report that result. Evaluation Metrics. We follow TempEval-3 and use their evaluation toolkit10 to report Precision, Recall, and F1 in terms of strict match and relaxed match (UzZaman et al., 2013). 5.2 Experiment Result Table 4 reports the overall performance. Among the 18 measures, SynTime-I and SynTime-E achieve 12 best results and 13 second best results. 
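As a rough illustration of the strict and relaxed match criteria reported in Table 4, the sketch below scores predicted spans against gold spans by exact extent versus simple overlap. It is a simplified stand-in for, not a reimplementation of, the official TempEval-3 toolkit; the character-offset convention and the example values are assumptions.

```python
# Illustrative scoring of time-expression spans under strict vs. relaxed match.
# Spans are (start, end) character offsets with end exclusive; TIMEX3 attribute
# scoring and the toolkit's tie-breaking are ignored in this simplification.

def f1_scores(pred_spans, gold_spans):
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    strict_hits = sum(1 for p in pred_spans if p in gold_spans)
    relaxed_hits = sum(1 for p in pred_spans
                       if any(overlaps(p, g) for g in gold_spans))
    relaxed_recall_hits = sum(1 for g in gold_spans
                              if any(overlaps(p, g) for p in pred_spans))

    def prf(tp_precision, tp_recall):
        precision = tp_precision / len(pred_spans) if pred_spans else 0.0
        recall = tp_recall / len(gold_spans) if gold_spans else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    return {"strict": prf(strict_hits, strict_hits),
            "relaxed": prf(relaxed_hits, relaxed_recall_hits)}


# e.g. gold "last week" at (4, 13), prediction "week" at (9, 13): relaxed hit only
print(f1_scores([(9, 13)], [(4, 13)]))
```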
Except the strict match on WikiWars dataset, both SynTime-I and SynTime-E achieve F1 above 91%. For the relaxed match on all three datasets, SynTime-I and SynTime-E achieve recalls above 92%. The high recalls are consistent with our finding that at least 91.81% of time expressions contain time token(s). (See Table 2.) This indicates that SynTime covers most of time tokens. On Tweets dataset, SynTime-I and SynTime-E achieve exceptionally good performance. Their F1 reach 91.74% with 11.37% improvement in strict match and 95.87% with 6.33% improvement in re10http://www.cs.rochester.edu/˜naushad/tempeval3/ tools.zip laxed match. The reasons are that in informal environment people tend to use time expressions in minimum length, (62.91% of one-word time expressions in Tweets; see Figure 1.) the size of time keywords is small, (only 60 distinct time tokens; see Table 3.) and even in tweets people tend to use formal words. (See Section 4.3 for our finding from Twitter word clusters.) For precision, SynTime achieves comparable results in strict match and performs slightly poorer in relaxed match. 5.2.1 SynTime-I vs. Baseline Methods On TimeBank dataset, SynTime-I achieves F1 of 92.09% in strict match and of 94.96% in relaxed match. On Tweets, SynTime-I achieves 91.74% and 95.87%, respectively. It outperforms all the baseline methods. The reason is that for the rulebased time taggers, their rules are designed in a fixed way, lacking flexibility. For example, SUTime could recognize ‘1 year’ but not ‘year 1.’ For the machine learning based methods, some of the features they used actually hurt the modelling. Time expressions involve quite many changing numbers which in themselves affect the pattern recognition. For example, it is difficult to build connection between ‘May 22, 1986’ and ‘February 01, 1989’ at the level of word or of character. One suggestion is to consider a type-based learning method that could use type information. For example, the above two time expressions refer to the same pattern of ‘MONTH NUMERAL COMMA 427 Table 5: Number of time tokens and modifiers for expansion Dataset #Time Tokens #Modifiers TimeBank 3 5 WikiWars 16 21 Tweets 3 2 YEAR’ at the level of token type. POS is a kind of type information. But according to our analysis, POS could not distinguish time expressions from common words. Features need carefully designing. On WikiWars, SynTime-I achieves competitive results in both matches. Time expressions in WikiWars include lots of prepositions and quite a few descriptive time expressions. SynTime could not fully recognize such kinds of time expressions because it follows TimeML and TimeBank. 5.2.2 SynTime-E vs. SynTime-I Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly. This confirms that the size of time words is small, and that SynTime-I covers most of time words. On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall. It improves the recall by 3.25% in strict match and by 2.98% in relaxed match. This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance. 5.3 Limitations SynTime assumes that words are tokenized and POS tagged correctly. In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools. 
For example, Stanford POS Tagger assigns VBD to the word ‘sat’ in ‘friday or sat’ while whose tag should be NNP. The incorrect tokens and POS tags affect the result. 6 Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior. Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949). Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime. SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion. Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger. Because our heuristic rules are quite simple, SynTime is light-weight and runs in real time. Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens. In this paper, we test SynTime on specific domains and specific text types in English. The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types. Time expression is part of language and follows the principle of least effort. Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995), we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle. In the future we will try our analytical method on other parts of language. Acknowledgments The authors would like to thank the three anonymous reviewers for their insightful comments and constructive suggestions. This research is mainly supported by the Singapore Ministry of Education Research Fund MOE2014-T2-2-066. References Omar Alonso, Jannik Strotgen, Ricardo Baeza-Yates, and Michael Gertz. 2011. Temporal information retrieval: Challenges and opportunities. In Proceedings of 1st International Temporal Web Analytics Workshop. pages 1–8. Gabor Angeli, Christopher D. Manning, and Daniel Jurafsky. 2012. Parsing time: Learning to interpret time expressions. In Proceedings of 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 446– 455. Steven Bethard. 2013. Cleartk-timeml: A minimalist approach to tempeval 2013. In Proceedings of the 7th International Workshop on Semantic Evaluation. pages 10–14. 428 Ricardo Campos, Gael Dias, Alipio M. Jorge, and Adam Jatowt. 2014. Survey of temporal information retrieval and related applications. ACM Computing Surveys 47(2):15. Angel X. Chang and Christopher D. Manning. 2012. Sutime: A library for recognizing and normalizing time expressions. In Proceedings of 8th International Conference on Language Resources and Evaluation. pages 3735–3740. Angel X. Chang and Christopher D. Manning. 2013. Sutime: Evaluation in tempeval-3. In Proceedings of second Joint Conference on Lexical and Computational Semantics (SEM). pages 78–82. Angel X. Chang and Christopher D. Manning. 2014. Tokensregex: Defining cascaded regular expressions over tokens. 
Technical report, Department of Computer Science, Stanford University. Noam Chomsky. 1986. Knowledge of Language: Its Nature, Origin, and Use. New York: Prager. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research 12:2121–2159. Michele Filannino, Gavin Brown, and Goran Nenadic. 2013. Mantime: Temporal expression identification and normalization in the tempeval-3 challenge. In Proceedings of the 7th International Workshop on Semantic Evaluation. Jerry R. Hobbs, Douglas E. Appelt, John Bear, David Israel, Megumi Kameyama, Mark Stickel, and Mabry Tyson. 1997. Fastus: A cascaded finite-state transducer for extracting information from natrual-language text. In Finite State Devices for Natural Language Processing. pages 383–406. Kenton Lee, Yoav Artzi, Jesse Dodge, and Luke Zettlemoyer. 2014. Context-dependent semantic parsing for time expressions. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics. pages 1437–1447. Hector Llorens, Leon Derczynski, Robert Gaizauskas, and Estela Saquete. 2012. Timen: An open temporal expression normalisation resource. In Proceedings of 8th International Conference on Language Resources and Evaluation. pages 3044–3051. Hector Llorens, Estela Saquete, and Borja Navarro. 2010. Tipsem (english and spanish): Evaluating crfs and semantic roles in tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation. pages 284–291. Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. Cambride: MIT Press. Pawel Mazur and Robert Dale. 2010. Wikiwars: A new corpus for research on temporal expressions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 913–922. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. Engilish gigaword fifth edition. Steven Pinker. 1995. The language instinct: The new science of language and mind, volume 7529. Penguin. James Pustejovsky, Jose Castano, Robert Ingria, Roser Sauri, Robert Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir Radev. 2003a. Timeml: Robust specification of event and temporal expressions in text. New Directions in Question Answering 3:28–34. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Beth Sundheim, Dragomir Radev, David Day, Lisa Ferro, and Marcia Lazo. 2003b. The timebank corpus. Corpus Linguistics 2003:647–656. Mark Steedman. 1996. Surface Structure and Interpretation. The MIT Press. Jannik Str¨otgen and Michael Gertz. 2010. Heideltime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th International Workshop on Semantic Evaluation (SemEval’10). Association for Computational Linguistics, Stroudsburg, PA, USA, pages 321–324. Jannik Strotgen, Julian Zell, and Michael Gertz. 2013. Heideltime: Tuning english and developing spanish resources. In Proceedings of second Joint Conference on Lexical and Computational Semantics (SEM). pages 15–19. Jeniya Tabassum, Alan Ritter, and Wei Xu. 2016. Tweetime: A minimally supervised method for recognizing and normalizing time expressions in twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 307–318. Naushad UzZaman and James F. Allen. 2010. 
Trips and trios system for tempeval-2: Extracting temporal information from text. In Proceedings of the 5th International Workshop on Semantic Evaluation. pages 276–283. Naushad UzZaman, Hector Llorens, Leon Derczynski, Marc Verhagen, James Allen, and James Pustejovsky. 2013. Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Proceedings of the 7th International Workshop on Semantic Evaluation. pages 1–9. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval temporal relation identification. In Proceedings of the 4th International Workshop on Semantic Evaluation. pages 75–80. Marc Verhagen, Inderjeet Mani, Roser Sauri, Robert Knippen, Seok Bae Jang, Jessica Littman, Anna Rumshisky, John Phillips, Inderjeet Mani, Roser Sauri, Robert Knippen, Seok Bae Jang, Jessica Littman, Anna Rumshisky, John Phillips, and James Pustejovsky. 2005. Automating temporal annotation with tarqi. In Proceedings of the ACL Interactive Poster and Demonstration Sessions.. pages 81–84. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation. pages 57–62. George Zipf. 1949. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Addison-Wesley Press, Inc. 429
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 34–43 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1004 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 34–43 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1004 Neural Relation Extraction with Multi-lingual Attention Yankai Lin1, Zhiyuan Liu1∗, Maosong Sun1,2 1 Department of Computer Science and Technology, State Key Lab on Intelligent Technology and Systems, National Lab for Information Science and Technology, Tsinghua University, Beijing, China 2 Jiangsu Collaborative Innovation Center for Language Competence, Jiangsu, China Abstract Relation extraction has been widely used for finding unknown relational facts from the plain text. Most existing methods focus on exploiting mono-lingual data for relation extraction, ignoring massive information from the texts in various languages. To address this issue, we introduce a multi-lingual neural relation extraction framework, which employs monolingual attention to utilize the information within mono-lingual texts and further proposes cross-lingual attention to consider the information consistency and complementarity among cross-lingual texts. Experimental results on real-world datasets show that our model can take advantage of multi-lingual texts and consistently achieve significant improvements on relation extraction as compared with baselines. The source code of this paper can be obtained from https://github. com/thunlp/MNRE 1 Introduction People build many large-scale knowledge bases (KBs) to store structured knowledge about the real world, such as Wikidata1 and DBpedia2. KBs are playing an important role in many AI and NLP applications such as information retrieval and question answering. The facts in KBs are typically organized in the form of triplets, e.g., (New York, CityOf, United States). Since existing KBs are far from complete and new facts are growing infinitely, meanwhile manual annotation of these knowledge is time-consuming and ∗ Corresponding author: Zhiyuan Liu ([email protected]). 1http://www.wikidata.org/ 2http://wiki.dbpedia.org/ human-intensive, many works have been devoted to automated extraction of novel facts from various Web resources, where relation extraction (RE) from plain texts is one the most important knowledge sources. Among various methods for relation extraction, distant supervision is the most promising approach (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012), which can automatically generate training instances via aligning KBs and texts to address the issue of lacking supervised data. As the development of deep learning, Zeng et al. (2015) introduce neural networks to extract relations with automatically learned features from training instances. To address the wrong labelling issue of distant supervision data, Lin et al. (2016) further employ sentence-level attention mechanism in neural relation extraction, and achieves the state-of-the-art performance. However, most RE systems concentrate on extracting relational facts from mono-lingual data. In fact, people describe knowledge about the world using various languages. And people speaking different languages also share similar knowledge about the world due to the similarities of human experiences and human cognitive systems. 
For instance, though New York and United States are expressed as 纽约and 美国respectively in Chinese, both Americans and Chinese share the fact that “New York is a city of USA.” It is straightforward to build mono-lingual RE systems separately for each single language. But if so, it won’t be able to take full advantage of diverse information hidden in the data of various languages. Multi-lingual data will benefit relation extraction for the following two reasons: 1. Consistency. According to the distant supervision data in our experiments3, we find that over half of Chinese 3The data is generated by aligning Wikidata with Chinese 34 Relation City in English 1. New York is a city in the northeastern United States. Chinese 1. 纽约փӄ美国纽约ᐔђ঍䜞ཝ㾵⍁ ⋵ዮθᱥ美国ㅢжཝคᐸ਀ㅢжཝ⑥. (New York is in the United States New York and on the Atlantic coast of the southeast Atlantic, is the largest city and largest port in the United States.) 2. 纽约ᱥ美国ӰਙᴶཐⲺคᐸ. (New York is the most populous city in the United States) Table 1: An example of Chinese sentences and English sentence about the same relational fact (New York, CityOf, United States). Important parts are highlighted with bold face. and English sentences are longer than 20 words, in which only several words are related to the relational facts. Take Table 1 for example. The first Chinese sentence has over 20 words, in which only “纽约” (New York) and “ᱥ美国ㅢжཝค ᐸ” (is the biggest city in the United States) actually directly reflect the relational fact CityOf. It is thus non-trivial to locate and learn these relational patterns from complicated sentences for relation extraction. Fortunately, a relational fact is usually expressed with certain patterns in various languages, and the correspondence of these patterns among languages is substantially consistent. The pattern consistency among languages provides us augmented clues to enhance relational pattern learning for relation extraction. 2. Complementarity. From our experiment data, we also find that 42.2% relational facts in English data and 41.6% ones in Chinese data are unique. Moreover, for nearly half of relations, the number of sentences expressing relational facts of these relations varies a lot in different languages. It is thus straightforward that the texts in different languages can be complementary to each other, especially from those resource-rich languages to resource-poor languages, and improve the overall performance of relation extraction. To take full consideration of these issues, we propose Multi-lingual Attention-based Neural Relation Extraction (MNRE). We first employ a convolutional neural network (CNN) to embed the relational patterns in sentences into real-valued vectors. Afterwards, to consider the complementarity of all informative sentences in various lanBaidu Baike and English Wikipedia articles, which will be introduced in details in the section of experiments. guages and capture the consistency of relational patterns, we apply mono-lingual attention to select the informative sentences within each language and propose cross-lingual attention to take advantages of pattern consistency and complementarity among languages. Finally, we classify relations according to the global vector aggregated from all sentence vectors weighted by mono-lingual attention and cross-lingual attention. 
In experiments, we build training instances via distant supervision by aligning Wikidata with Chinese Baidu Baike and English Wikipedia articlesθ and evaluate the performance of relation extraction in both English and Chinese. The experimental results show that our framework achieves significant improvement for relation extraction as compared to all baseline methods including both monolingual and multi-lingual ones. It indicates that our framework can take full advantages of sentences in different languages and better capture sophisticated patterns expressing relations. 2 Related Work Recent years KBs have been widely used on various AI and NLP applications. As an important approach to enrich KBs, relation extraction from plain text has attracted many research interests. Relation extraction typically classifies each entity pair into various relation types according to supporting sentences that the both entities appear, which needs human-labelled relationspecific training instances. Many works have been invested to relation extraction including kernelbased model (Zelenko et al., 2003), embeddingbased model (Gormley et al., 2015), CNN-based models (Zeng et al., 2014; dos Santos et al., 2015), and RNN-based model (Socher et al., 2012). Nevertheless, these RE systems are insufficient due to the lack of training data. To address this issue, Mintz et al. (2009) align plain text with Freebase to automatically generate training instances following the distant supervision assumption. To further alleviate the wrong labelling problem, Riedel et al. (2010) model distant supervision for relation extraction as a multiinstance single-label learning problem, and Hoffmann et al. (2011); Surdeanu et al. (2012) regard it as a multi-instance multi-label learning problem. Recently, Zeng et al. (2015) attempt to connect neural networks with distant supervision following the expressed-at-least-once assumption. Lin 35 Relation Embedding Sentence Representation Chinese English English Chinese Output Representation Att Att Mono-lingual and Cross-lingual Attention 2 s 2 1x 1 1x 1 1 nx 1 2x 2 2x 2 2 nx Att Att 2 1s 1 2 s 1s 1 2 Figure 1: Overall architecture of our multi-lingual attention which contains two languages including English and Chinese. The solid lines indicates mono-lingual attention and the dashed lines indicates cross-lingual attention. et al. (2016) further utilize sentence-level attention mechanism to consider all informative sentences jointly. Most existing RE systems are absorbed in extracting relations from mono-lingual data, ignoring massive information lying in texts from multiple languages. In this area, Faruqui and Kumar (2015) present a language independent open domain relation extraction system, and Verga et al. (2015) further employ Universal Schema to combine OpenIE and link-prediction perspective for multi-lingual relation extraction. Both the works focus on multi-lingual transfer learning and learn a predictive model on a new language for existing KBs, by leveraging unified representation learning for cross-lingual entities. Different from these works, our framework aims to jointly model the texts in multiple languages to enhance relation extraction with distant supervision. To the best of our knowledge, this is the first effort to multi-lingual neural relation extraction. 
The scope of multi-lingual analysis has been widely considered in many tasks besides relation extraction, such as sentiment analysis (Boiy and Moens, 2009), cross-lingual document summarization (Boudin et al., 2011), information retrieval in Web search (Dong et al., 2014) and so on. 3 Methodology In this section, we describe our proposed MNRE framework in detail. The key motivation of MNRE is that, for each relational fact, the relation patterns in sentences of different languages should be substantially consistent, and MNRE can utilize the pattern consistency and complementarity among languages to achieve better results for relation extraction. Formally, given two entities, their corresponding sentences in m different languages are defined as T = {S1, S2, . . . , Sm}, where Sj = {x1 j, x2 j, . . . , xnj j } corresponds to the sentence set in the jth language with nj sentences. Our model measures a score f(T, r) for each relation r, which is expected to be high when r is the valid one, otherwise low. The MNRE framework contains two main components: 1. Sentence Encoder. Given a sentence x and two target entities, we employ CNN to encode relation patterns in x into a distributed representation x. The sentence encoder can also be implemented with GRU (Cho et al., 2014) or LSTM (Hochreiter and Schmidhuber, 1997). In experiments, we find CNN can achieve a better trade-off between computational efficiency and performance effectiveness. Thus, in this paper, we focus on CNN as the sentence encoder. 2. Multi-lingual Attention. With all sentences in various languages encoded into distributed vector representations, we apply mono-lingual and cross-lingual attentions to capture those informative sentences with accurate relation patterns. MNRE further aggregates these sentence vectors with weighted attentions into global representations for relation prediction. We introduce the two components in detail as follows. 3.1 Sentence Encoder The sentence encoder aims to transform a sentence x into its distributed representation x via CNN. First, it embeds the words in the input sentence 36 into dense real-valued vectors. Next, it employs convolutional, max-pooling and non-linear transformation layers to construct the distributed representation of the sentence, i.e., x. 3.1.1 Input Representation Following (Zeng et al., 2014), we transform each input word into the concatenation of two kinds of representations: (1) a word embedding which captures syntactic and semantic meanings of the word, and (2) a position embedding which specifies the position information of this word with respect to two target entities. In this way, we can represent the input sentence as a vector sequence w = {w1, w2, . . .} with wi ∈Rd, where d = da+db×2. (da and db are the dimensions of word embeddings and position embeddings respectively) 3.1.2 Convolution, Max-pooling and Non-linear Layers After encoding the input sentence, we use a convolutional layer to extract the local features, maxpooling, and non-linear layers to merge all local features into a global representation. First, the convolutional layer extracts local features by sliding a window of length l over the sentence and perform a convolution within each sliding window. Formally, the output of convolutional layer for the ith sliding window is computed as: pi = Wwi−l+1:i + b, (1) where wi−l+1:i indicates the concatenation of l word embeddings within the i-th window, W ∈ Rdc×(l×d) is the convolution matrix and b ∈Rdc is the bias vector. 
After that, we combine all local features via a max-pooling operation and apply a hyperbolic tangent function to obtain a fixed-sized sentence vector for the input sentence. Formally, the $j$-th element of the output vector $\mathbf{x} \in \mathbb{R}^{d_c}$ is calculated as:

$$[\mathbf{x}]_j = \tanh\big(\max_i (p_{ij})\big). \quad (2)$$

The final vector $\mathbf{x}$ is expected to efficiently encode the relation patterns about the target entities in the input sentence. Here, instead of the max-pooling operation, we can use the piecewise max-pooling operation adopted by PCNN (Zeng et al., 2015), a variation of the CNN, to better capture the relation patterns in the input sentence.

3.2 Multi-lingual Attention

To exploit the information of the sentences from all languages, our model adopts two kinds of attention mechanisms for multi-lingual relation extraction: (1) the mono-lingual attention, which selects the informative sentences within one language, and (2) the cross-lingual attention, which measures the pattern consistency among languages.

3.2.1 Mono-lingual Attention

To address the wrong-labelling issue in distant supervision, we follow the idea of sentence-level attention (Lin et al., 2016) and set mono-lingual attention for MNRE. It is intuitive that each human language has its own characteristics. Hence we adopt different mono-lingual attentions to de-emphasize the noisy sentences within each language. More specifically, for the $j$-th language and the sentence set $S_j$, we aim to aggregate all sentence vectors into a real-valued vector $\mathbf{S}_j$ for relation prediction. The mono-lingual vector $\mathbf{S}_j$ is computed as a weighted sum of the sentence vectors $\mathbf{x}^i_j$:

$$\mathbf{S}_j = \sum_i \alpha^i_j \mathbf{x}^i_j, \quad (3)$$

where $\alpha^i_j$ is the attention score of each sentence vector $\mathbf{x}^i_j$, defined as:

$$\alpha^i_j = \frac{\exp(e^i_j)}{\sum_k \exp(e^k_j)}, \quad (4)$$

where $e^i_j$ is referred to as a query-based function which scores how well the input sentence $\mathbf{x}^i_j$ reflects its labelled relation $r$. There are many ways to obtain $e^i_j$, and here we simply compute it as the inner product:

$$e^i_j = \mathbf{x}^i_j \cdot \mathbf{r}_j. \quad (5)$$

Here $\mathbf{r}_j$ is the query vector of the relation $r$ with respect to the $j$-th language.

3.2.2 Cross-lingual Attention

Besides mono-lingual attention, we propose cross-lingual attention for neural relation extraction to better take advantage of multi-lingual data. The key idea of cross-lingual attention is to emphasize those sentences which have strong consistency among different languages. On the basis of mono-lingual attention, cross-lingual attention is capable of further removing unlikely sentences and yielding more concentrated and informative sentences, owing to the consistent correspondence of relation patterns among different languages.

Cross-lingual attention works similarly to mono-lingual attention. Suppose $j$ indicates a language and $k$ is another language ($k \neq j$). Formally, the cross-lingual representation $\mathbf{S}_{jk}$ is defined as a weighted sum of the sentence vectors $\mathbf{x}^i_j$ in the $j$-th language:

$$\mathbf{S}_{jk} = \sum_i \alpha^i_{jk} \mathbf{x}^i_j, \quad (6)$$

where $\alpha^i_{jk}$ is the cross-lingual attention score of each sentence vector $\mathbf{x}^i_j$ with respect to the $k$-th language, defined as:

$$\alpha^i_{jk} = \frac{\exp(e^i_{jk})}{\sum_{i'} \exp(e^{i'}_{jk})}, \quad (7)$$

where $e^i_{jk}$ is referred to as a query-based function which scores the consistency between the input sentence $\mathbf{x}^i_j$ in the $j$-th language and the relation patterns in the $k$-th language for expressing the semantic meaning of the labelled relation $r$. Similar to the mono-lingual attention, we compute $e^i_{jk}$ as follows:

$$e^i_{jk} = \mathbf{x}^i_j \cdot \mathbf{r}_k, \quad (8)$$

where $\mathbf{r}_k$ is the query vector of the relation $r$ with respect to the $k$-th language. Note that, for convenience, we denote the mono-lingual attention vectors $\mathbf{S}_j$ as $\mathbf{S}_{jj}$ in the remainder of this paper.
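The max-pooling of Eq. (2) and the mono-/cross-lingual attention of Eqs. (3)-(8) can be sketched as follows. This is again only an illustration under the assumption that the sentence window outputs and relation query vectors are already computed; the helper names are ours.

```python
import numpy as np

def sentence_vector(p):
    """Eq. (2): element-wise max over all windows followed by tanh."""
    return np.tanh(p.max(axis=0))                 # (d_c,)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attended_bag(X_j, r_k):
    """Eqs. (3)-(8): weight the sentences of language j by a relation query vector.

    X_j: (n_j, d_c) sentence vectors of language j; r_k: (d_c,) query vector.
    Passing the query of language j itself yields the mono-lingual vector S_jj;
    passing the query of another language k yields the cross-lingual vector S_jk.
    """
    e = X_j @ r_k                                 # e^i_{jk} = x^i_j . r_k
    alpha = softmax(e)                            # attention over the n_j sentences
    return alpha @ X_j                            # S_jk = sum_i alpha^i_{jk} x^i_j
```

For example, attended_bag(X_j, queries[j]) gives S_jj, while attended_bag(X_j, queries[k]) with k different from j gives S_jk.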
3.3 Prediction

For each entity pair and its corresponding sentence set $T$ in $m$ languages, we can obtain $m \times m$ vectors $\{\mathbf{S}_{jk} \mid j, k \in \{1, \ldots, m\}\}$ from the neural networks with multi-lingual attention. The vectors with $j = k$ are mono-lingual attention vectors, and those with $j \neq k$ are cross-lingual attention vectors. We take all vectors $\{\mathbf{S}_{jk}\}$ together and define the overall score function $f(T, r)$ as follows:

$$f(T, r) = \sum_{j,k \in \{1,\ldots,m\}} \log p(r \mid \mathbf{S}_{jk}, \theta), \quad (9)$$

where $p(r \mid \mathbf{S}_{jk}, \theta)$ is the probability of predicting the relation $r$ conditioned on $\mathbf{S}_{jk}$, computed using a softmax layer as follows:

$$p(r \mid \mathbf{S}_{jk}, \theta) = \mathrm{softmax}(\mathbf{M}\mathbf{S}_{jk} + \mathbf{d}), \quad (10)$$

where $\mathbf{d} \in \mathbb{R}^{n_r}$ is a bias vector, $n_r$ is the number of relation types, and $\mathbf{M} \in \mathbb{R}^{n_r \times d_c}$ is a global relation matrix initialized randomly. To better consider the characteristics of each human language, we further introduce $\mathbf{R}_k$ as the specific relation matrix of the $k$-th language. Here we simply define $\mathbf{R}_k$ as composed of the vectors $\mathbf{r}_k$ in Eq. (8). Hence, Eq. (10) can be extended to:

$$p(r \mid \mathbf{S}_{jk}, \theta) = \mathrm{softmax}\big[(\mathbf{R}_k + \mathbf{M})\mathbf{S}_{jk} + \mathbf{d}\big], \quad (11)$$

where $\mathbf{M}$ encodes the global patterns for predicting relations and $\mathbf{R}_k$ encodes the language-specific characteristics. Note that, in the training phase, the vectors $\{\mathbf{S}_{jk}\}$ are constructed using Eq. (3) and (6) with the labelled relation. In the testing phase, since the relation is not known in advance, we construct different vectors $\{\mathbf{S}_{jk}\}$ for each possible relation $r$ to compute $f(T, r)$ for relation prediction.

3.4 Optimization

Here we introduce the learning and optimization details of our MNRE framework. We define the objective function as follows:

$$J(\theta) = \sum_{i=1}^{s} f(T_i, r_i), \quad (12)$$

where $s$ indicates the number of entity pairs, each corresponding to a sentence set in different languages, and $\theta$ indicates all parameters of our framework. To solve the optimization problem, we adopt mini-batch stochastic gradient descent (SGD) to maximize the objective function. For learning, we iterate by randomly selecting a mini-batch from the training set until convergence.

4 Experiments

We first introduce the dataset and evaluation metrics used in the experiments. Next, we use a validation set to determine the best model parameters and choose the best model via early stopping. Afterwards, we show the effectiveness of our framework in considering pattern complementarity and consistency for multi-lingual relation extraction through quantitative and qualitative analysis. Finally, we compare the effect of the two kinds of relation matrices in Eq. (11) used for prediction.

4.1 Datasets and Evaluation Metrics

We generate a new multi-lingual relation extraction dataset to evaluate our MNRE framework. Without loss of generality, the experiments focus on relation extraction from two languages, English and Chinese. In this dataset, the Chinese instances are generated by aligning Chinese Baidu Baike with Wikidata, and the English instances are generated by aligning English Wikipedia articles with Wikidata. The relational facts of Wikidata in this dataset are divided into three parts for training, validation and testing, respectively. There are 176 relations, including a special relation NA indicating that there is no relation between the entities.
The validation and testing sets of the Chinese and English parts contain the same facts. We list the statistics of the dataset in Table 2.

Table 2: Statistics of the dataset (176 relations for both languages).
Dataset | Part  | #Sent     | #Fact
English | Train | 1,022,239 | 47,638
English | Valid | 80,191    | 2,192
English | Test  | 162,018   | 4,326
Chinese | Train | 940,595   | 42,536
Chinese | Valid | 82,699    | 2,192
Chinese | Test  | 167,224   | 4,326

We follow previous works (Mintz et al., 2009) and investigate the performance of RE systems using the held-out evaluation, by comparing the relational facts discovered by the RE systems from the testing set with the facts in the KB. The evaluation method assumes that if an RE system accurately finds more relational facts of the KB from the testing set, it achieves better performance for relation extraction. The held-out evaluation provides an approximate measure of RE performance without time-consuming human evaluation. In the experiments, we report precision/recall curves as the evaluation metric.

4.2 Experimental Settings

We tune the parameters of our MNRE framework by grid search on the validation set. For training, we set the number of iterations over all the training data to 15. The best models are selected by early stopping using the evaluation results on the validation set. Table 3 shows the best setting of all parameters used in our experiments.

Table 3: Parameter settings.
Hyper-parameter            | Value
Window size w              | 3
Sentence embedding size dc | 230
Word dimension da          | 50
Position dimension db      | 5
Batch size B               | 160
Learning rate λ            | 0.001
Dropout probability p      | 0.5

4.3 Effectiveness of Consistency

To demonstrate the effectiveness of considering pattern consistency among languages, we empirically compare different methods through held-out evaluation. We select the CNN proposed in (Zeng et al., 2014) as our sentence encoder and implement it ourselves, achieving results comparable to those the authors reported on their experimental dataset NYT10 (http://iesl.cs.umass.edu/riedel/ecml/). We compare the performance of our framework with the [P]CNN model trained with only English data ([P]CNN-En), only Chinese data ([P]CNN-Zh), a joint model ([P]CNN+joint) which predicts using [P]CNN-En and [P]CNN-Zh jointly, and another joint model with shared embeddings ([P]CNN+share) which trains [P]CNN-En and [P]CNN-Zh with common relation embedding matrices.

Figure 2: Top: Aggregated precision/recall curves of CNN-En, CNN-Zh, CNN+joint, CNN+share, and MNRE(CNN). Bottom: Aggregated precision/recall curves of PCNN-En, PCNN-Zh, PCNN+joint, PCNN+share, and MNRE(PCNN).

From Fig. 2, we have the following observations: (1) Both [P]CNN+joint and [P]CNN+share achieve better performance as compared to [P]CNN-En and [P]CNN-Zh. It indicates that utilizing Chinese and English sentences jointly is beneficial for extracting novel relational facts. The reason is that relational facts discovered from multiple languages are more likely to be true. (2) CNN+share only has performance similar to CNN+joint, and is even a bit worse when the recall ranges from 0.1 to 0.2. Besides, PCNN+share performs worse than PCNN+joint over nearly the entire range of recall. It demonstrates that a simple combination of multiple languages by sharing relation embedding matrices cannot capture further implicit correlations among the languages. (3) Our MNRE model achieves the highest precision over the entire range of recall as compared to the other methods, including the [P]CNN+joint and [P]CNN+share models.
By grid searching the parameters of these baseline models, we observe that both [P]CNN+joint and [P]CNN+share cannot achieve results competitive with MNRE even when increasing the size of the output layer. This indicates that no more useful information can be captured by simply increasing the model size. On the contrary, our proposed MNRE model successfully improves multi-lingual relation extraction by considering pattern consistency among languages.

We further give an example of cross-lingual attention in Table 4. It shows the four sentences having the highest and lowest Chinese-to-English and English-to-Chinese attention weights with respect to the relation PlaceOfBirth in MNRE. We highlight the entity pairs in bold face. For comparison, we also show their attention weights from CNN+Zh and CNN+En. From the table we find that, although all four sentences actually express the fact that Barzun was born in France, the first and third sentences contain much more noisy information that may confuse RE systems. By considering the pattern consistency between sentences in the two languages with cross-lingual attention, MNRE identifies the second and fourth sentences, which unambiguously express the relation PlaceOfBirth, with higher attention as compared to CNN+Zh and CNN+En.

Table 4: An example of our multi-lingual attention. Low, medium and high indicate the attention weights.
CNN+Zh | CNN+En | MNRE | Sentence
—      | Medium | Low  | 1. Barzun is a commune in the Pyrénées-Atlantiques department in the Nouvelle-Aquitaine region of south-western France.
—      | Medium | High | 2. Barzun was born in Créteil, France.
Medium | —      | Low  | 3. (Chinese sentence; translation: As a top intellectual immigrating from France to the United States, Barzun, together with Lionel Trilling and Dwight Macdonald, actively participated in public knowledge life in the United States during the cold war …)
Medium | —      | High | 4. (Chinese sentence; translation: Barzun was born in a French intellectual family in 1907 and went to America in 1920.)

4.4 Effectiveness of Complementarity

To demonstrate the effectiveness of considering pattern complementarity among languages, we empirically compare the following methods through held-out evaluation: MNRE for English (MNRE-En) and MNRE for Chinese (MNRE-Zh), which only use the mono-lingual vectors to predict relations, and the [P]CNN-En and [P]CNN-Zh models.

Figure 3: Top: Aggregated precision/recall curves of CNN-En, CNN-Zh, MNRE(CNN)-En and MNRE(CNN)-Zh. Bottom: Aggregated precision/recall curves of PCNN-En, PCNN-Zh, MNRE(PCNN)-En and MNRE(PCNN)-Zh.

Fig. 3 shows the aggregated precision/recall curves of the four models for both CNN and PCNN. From the figure, we find that: (1) MNRE-En and MNRE-Zh outperform [P]CNN-En and [P]CNN-Zh over almost the entire range of recall. It indicates that, by jointly training with multi-lingual attention, both the Chinese and the English relation extractors benefit from the sentences in the other language. (2) Although [P]CNN-En underperforms [P]CNN-Zh, MNRE-En is comparable to MNRE-Zh when jointly trained through multi-lingual attention. It demonstrates that both the Chinese and the English relation extractors can take full advantage of the texts in both languages via our proposed multi-lingual attention scheme.

Table 5: Detailed results (precision@1) of some specific relations. #Sent-En and #Sent-Zh indicate the numbers of English/Chinese sentences which are labelled with the relations.
Relation             | #Sent-En | #Sent-Zh | CNN-En | CNN-Zh | MNRE-En | MNRE-Zh
Contains             | 993      | 6,984    | 17.95  | 69.87  | 73.72   | 75.00
HeadquartersLocation | 1,949    | 210      | 43.04  | 0.00   | 41.77   | 50.63
Father               | 1,833    | 983      | 64.71  | 77.12  | 86.27   | 83.01
CountryOfCitizenship | 25,322   | 15,805   | 95.22  | 93.23  | 98.41   | 98.21
Table 5 shows the detailed results (in precision@1) of some specific relations for which the training instances are unbalanced between the English and Chinese sides. From the table, we can see that: (1) For the relation Contains, for which the number of English training instances is only 1/7 of the Chinese ones, CNN-En performs much worse than CNN-Zh due to the lack of training data. Nevertheless, by jointly training through multi-lingual attention, MNRE(CNN)-En is comparable to, and even slightly better than, MNRE(CNN)-Zh. (2) For the relation HeadquartersLocation, for which the number of Chinese training instances is only 1/9 of the English ones, CNN-Zh cannot even predict any correct results. The reason is perhaps that CNN-Zh is not sufficiently trained for this relation, because there are only 210 Chinese training instances. Similarly, by jointly training through multi-lingual attention, MNRE(CNN)-En and MNRE(CNN)-Zh both achieve promising results. (3) For the relations Father and CountryOfCitizenship, for which the numbers of English and Chinese sentences are less unbalanced, our MNRE can still improve the performance of relation extraction on both the English and the Chinese sides.

4.5 Comparison of Relation Matrices

For relation prediction, we use two kinds of relation matrices: M, which considers the global consistency of relations, and R, which considers the language-specific characteristics of relations. To measure the effect of the two relation matrices, we compare the performance of MNRE using both matrices with that of MNRE using only M (MNRE-M) and only R (MNRE-R).

Figure 4: Top: Aggregated precision/recall curves of MNRE(CNN)-M, MNRE(CNN)-R and MNRE(CNN). Bottom: Aggregated precision/recall curves of MNRE(PCNN)-M, MNRE(PCNN)-R and MNRE(PCNN).

Fig. 4 shows the precision/recall curves for each method. From the figure, we observe that: (1) The performance of MNRE-M is much worse than both MNRE-R and MNRE. It indicates that we cannot just use the global relation matrix for relation prediction. The reason is that each language has its own specific ways of expressing relation patterns, which cannot be well integrated into a single relation matrix. (2) MNRE(CNN)-R has performance similar to MNRE(CNN) when the recall is low. However, it shows a sharp decline when the recall reaches 0.25. It suggests that there also exists a global consistency of relation patterns among languages which cannot be neglected. Hence, we should combine both M and R for multi-lingual relation extraction, as proposed in our MNRE framework.
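To make the role of the two matrices concrete, the prediction step of Eqs. (9)-(11) can be summarized in a short sketch. The nested-dictionary layout of the attention vectors S_jk (assumed to be precomputed for the candidate relation being scored) and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def score_relation(S, r, M, R, d):
    """Eq. (9)-(11): f(T, r) as a sum over all language pairs (j, k).

    S[j][k]: attended vector of language j under the query of language k, built for candidate r
    M: (n_r, d_c) global relation matrix; R[k]: (n_r, d_c) language-specific matrix; d: (n_r,) bias
    """
    score = 0.0
    for j in S:
        for k in S[j]:
            p = softmax((R[k] + M) @ S[j][k] + d)   # Eq. (11)
            score += np.log(p[r])                    # Eq. (9)
    return score
```

At test time this score would be computed for every candidate relation r, with the attention vectors rebuilt per candidate, and the highest-scoring relation returned.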
5 Conclusion

In this paper, we introduce a neural relation extraction framework with multi-lingual attention that takes pattern consistency and complementarity among multiple languages into consideration. We evaluate our framework on the multi-lingual relation extraction task, and the results show that our framework can effectively model relation patterns among languages and achieve state-of-the-art results.

We will explore the following directions as future work: (1) In this paper, we only consider sentence-level multi-lingual attention for relation extraction. In fact, we find that word alignment information may also be helpful for capturing relation patterns. Hence, word-level multi-lingual attention, which may discover implicit alignments between words in multiple languages, could further improve multi-lingual relation extraction; we will explore its effectiveness in future work. (2) MNRE can be flexibly implemented in the scenario of multiple languages, and this paper focuses on the two languages English and Chinese. In the future, we will extend MNRE to more languages and explore its effectiveness there.

Acknowledgments

This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), and the Key Technologies Research and Development Program of China (No. 2014BAK04B03). This work is also funded by the Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) in Project Crossmodal Learning, NSFC 61621136008 / DFG TRR-169.

References

Erik Boiy and Marie-Francine Moens. 2009. A machine learning approach to sentiment analysis in multilingual web texts. Information Retrieval 12(5):526–558.

Florian Boudin, Stéphane Huet, and Juan-Manuel Torres-Moreno. 2011. A graph-based approach to cross-language multi-document summarization. Polibits (43):113–118.

Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.

Meiping Dong, Yong Cheng, Yang Liu, Jia Xu, Maosong Sun, Tatsuya Izuha, and Jie Hao. 2014. Query lattice for translation retrieval. In Proceedings of COLING. pages 2031–2041.

Cícero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of ACL. volume 1, pages 626–634.

Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. arXiv preprint arXiv:1503.06450.

Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of EMNLP. pages 1774–1784.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.

Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of ACL-HLT. pages 541–550.

Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL. volume 1, pages 2124–2133.

Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL-IJCNLP. pages 1003–1011.

Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010.
Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD. pages 148–163.

Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP-CoNLL. pages 1201–1211.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR 15(1):1929–1958.

Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of EMNLP. pages 455–465.

Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2015. Multilingual relation extraction using compositional universal schema. arXiv preprint arXiv:1511.06396.

Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. JMLR 3(Feb):1083–1106.

Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP.

Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING. pages 2335–2344.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 430–439 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1040

Learning with Noise: Enhance Distantly Supervised Relation Extraction with Dynamic Transition Matrix

Bingfeng Luo1, Yansong Feng∗1, Zheng Wang2, Zhanxing Zhu3, Songfang Huang4, Rui Yan1 and Dongyan Zhao1
1ICST, Peking University, China
2School of Computing and Communications, Lancaster University, UK
3Peking University, China
4IBM China Research Lab, China
{bf luo,fengyansong,zhanxing.zhu,ruiyan,zhaody}@pku.edu.cn
[email protected] [email protected]

Abstract

Distant supervision significantly reduces human efforts in building training data for many classification tasks. While promising, this technique often introduces noise to the generated training data, which can severely affect the model performance. In this paper, we take a deep look at the application of distant supervision in relation extraction. We show that the dynamic transition matrix can effectively characterize the noise in the training data built by distant supervision. The transition matrix can be effectively trained using a novel curriculum learning based method without any direct supervision about the noise. We thoroughly evaluate our approach under a wide range of extraction scenarios. Experimental results show that our approach consistently improves the extraction results and outperforms the state-of-the-art in various evaluation scenarios.

1 Introduction

Distant supervision (DS) is rapidly emerging as a viable means for supporting various classification tasks – from relation extraction (Mintz et al., 2009) and sentiment classification (Go et al., 2009) to cross-lingual semantic analysis (Fang and Cohn, 2016). By using knowledge learned from seed examples to label data, DS automatically prepares large-scale training data for these tasks.

While promising, DS does not guarantee perfect results and often introduces noise to the generated data. In the context of relation extraction, DS works by considering sentences containing both the subject and object of a <subj, rel, obj> triple as its supports. However, the generated data are not always perfect. For instance, DS could match the knowledge base (KB) triple <Donald Trump, born-in, New York> in false positive contexts like Donald Trump worked in New York City. Prior works (Takamatsu et al., 2012; Ritter et al., 2013) show that DS often mistakenly labels real positive instances as negative (false negative) or vice versa (false positive), and there can be confusion among positive labels as well. These noises can severely affect training and lead to poorly-performing models.

Tackling the noisy data problem of DS is non-trivial, since there is usually no explicit supervision to capture the noise. Previous works have tried to remove sentences containing unreliable syntactic patterns (Takamatsu et al., 2012), design new models to capture certain types of noise, or aggregate multiple predictions under the at-least-one assumption that at least one of the aligned sentences supports the triple in the KB (Riedel et al., 2010; Surdeanu et al., 2012; Ritter et al., 2013; Min et al., 2013).
These approaches represent a substantial leap forward towards making DS more practical. However, they are either tightly coupled to certain types of noise or have to rely on manual rules to filter noise, and thus are unable to scale. Recent breakthroughs in neural networks provide a new way to reduce the influence of incorrectly labeled data by aggregating multiple training instances attentively for relation classification, without explicitly characterizing the inherent noise (Lin et al., 2016; Zeng et al., 2015). Although promising, modeling noise within neural network architectures is still in its early stage and much remains to be done.

In this paper, we aim to enhance DS noise modeling by providing the capability to explicitly characterize the noise in the DS-style training data within neural network architectures. We show that while noise is inevitable, it is possible to characterize the noise pattern in a unified framework along with the original classification objective. Our key insight is that the DS-style training data typically contain useful clues about the noise pattern. For example, we can infer that, since some people work in their birthplaces, DS could wrongly label a training sentence describing a working place as a born-in relation. Our novel approach to noise modeling is to use a dynamically-generated transition matrix for each training instance to (1) characterize the possibility that the DS-labeled relation is confused and (2) indicate its noise pattern. To tackle the challenge of having no direct guidance over the noise pattern, we employ a curriculum learning based training method to gradually model the noise pattern over time, and utilize trace regularization to control the behavior of the transition matrix during training. Our approach is flexible: while it does not make any assumptions about the data quality, the algorithm can make effective use of data-quality prior knowledge to guide the learning procedure when such clues are available.

We apply our method to the relation extraction task and evaluate under various scenarios on two benchmark datasets. Experimental results show that our approach consistently improves both extraction settings, outperforming the state-of-the-art models in different settings. Our work offers an effective way of tackling the noisy data problem of DS, making DS more practical at scale. Our main contributions are to (1) design a dynamic transition matrix structure to characterize the noise introduced by DS, and (2) design a curriculum learning based framework to adaptively guide the training procedure to learn with noise.

2 Problem Definition

The task of distantly supervised relation extraction is to extract knowledge triples, <subj, rel, obj>, from free text with the training data constructed by aligning existing KB triples with a large corpus. Specifically, given a triple in the KB, DS works by first retrieving all the sentences containing both subj and obj of the triple, and then constructing the training data by considering these sentences as support for the existence of the triple.

This task can be conducted at both the sentence and the bag level. The former takes a sentence s containing both subj and obj as input, and outputs the relation expressed by the sentence between subj and obj. The latter setting alleviates the noisy data problem by using the at-least-one assumption that at least one of the retrieved sentences containing both subj and obj supports the <subj, rel, obj> triple. It takes a bag of sentences S as input, where each sentence s ∈ S contains both subj and obj, and outputs the relation between subj and obj expressed by this bag.
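As a minimal illustration of this data construction procedure, the sketch below labels sentences by naive KB alignment. The input formats (triples as tuples, sentences paired with their entity mentions) are simplifying assumptions, and a real pipeline would add entity linking and sampling of NA (negative) pairs.

```python
from collections import defaultdict

def build_ds_training_data(kb_triples, corpus):
    """Distant supervision: label every sentence that mentions both entities of a KB triple.

    kb_triples: iterable of (subj, rel, obj)
    corpus:     iterable of (sentence_text, set_of_entities_mentioned)
    Returns sentence-level instances and sentence bags keyed by entity pair.
    """
    relation_of = {(s, o): r for s, r, o in kb_triples}
    sentence_level, bags = [], defaultdict(list)
    for text, entities in corpus:
        # naive scan over all KB pairs; an index over entity pairs would be used in practice
        for (subj, obj), rel in relation_of.items():
            if subj in entities and obj in entities:
                sentence_level.append((text, subj, obj, rel))   # may be a false positive
                bags[(subj, obj)].append(text)                  # bag for the at-least-one setting
    return sentence_level, bags
```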
3 Our approach

In order to deal with the noisy training data obtained through DS, our approach follows four steps, as depicted in Figure 1. First, each input sentence is fed to a sentence encoder to generate an embedding vector. Our model then takes the sentence embeddings as input and produces a predicted relation distribution, p, for the input sentence (or the input sentence bag). At the same time, our model dynamically produces a transition matrix, T, which is used to characterize the noise pattern of the sentence (or the bag). Finally, the predicted distribution is multiplied by the transition matrix to produce the observed relation distribution, o, which is used to match the noisy relation labels assigned by DS, while the predicted relation distribution p serves as the output of our model during testing. One of the key challenges of our approach is determining the element values of the transition matrix, which will be described in Section 4.

Figure 1: Overview of our approach.

3.1 Sentence-level Modeling

Sentence Embedding and Prediction In this work, we use a piecewise convolutional neural network (Zeng et al., 2015) for sentence encoding, but other sentence embedding models can also be used. We feed the sentence embedding to a fully-connected layer, and use softmax to generate the predicted relation distribution, p.

Noise Modeling First, each sentence embedding x, generated by the sentence encoder, is passed through a fully-connected layer with a non-linearity to obtain the sentence embedding xn used specifically for noise modeling. We then use softmax to calculate the transition matrix T for each sentence:

$$T_{ij} = \frac{\exp(\mathbf{w}_{ij}^{T}\mathbf{x}_n + b)}{\sum_{j=1}^{|C|}\exp(\mathbf{w}_{ij}^{T}\mathbf{x}_n + b)}, \quad (1)$$

where $T_{ij}$ is the conditional probability for the input sentence to be labeled as relation j by DS given i as the true relation, b is a scalar bias, |C| is the number of relations, and $\mathbf{w}_{ij}$ is the weight vector characterizing the confusion between i and j.

Here, we dynamically produce a transition matrix T specifically for each sentence, but with the parameters ($\mathbf{w}_{ij}$) shared across the dataset. By doing so, we are able to adaptively characterize the noise pattern for each sentence with only a few parameters. In contrast, one could also produce a global transition matrix for all sentences, with much less computation, where one does not need to compute T on the fly (see Section 6.1).

Observed Distribution When we characterize the noise in a sentence with a transition matrix T, if its true relation is i, we can assume that i might be erroneously labeled as relation j by DS with probability $T_{ij}$. We can therefore capture the observed relation distribution, o, by multiplying T and the predicted relation distribution, p:

$$\mathbf{o} = \mathbf{T}^{T} \cdot \mathbf{p}, \quad (2)$$

where o is then normalized to ensure $\sum_i o_i = 1$. Rather than using the predicted distribution p to directly match the relation labeled by DS (Zeng et al., 2015; Lin et al., 2016), we utilize o to match the noisy labels during training and still use p as the output during testing. This captures the procedure of how the noisy label is produced and thus protects p from the noise.
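A minimal sketch of this sentence-level noise modeling step (Eqs. (1)-(2)) is given below, assuming the encoder output x is available. The projection shapes, the tanh non-linearity and the parameter names are illustrative assumptions rather than the authors' exact design.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def noise_model(x, W_pred, W_noise, W_trans, b_trans):
    """Return the predicted distribution p, the dynamic transition matrix T, and the observed o.

    x:       (d,) sentence embedding from the encoder
    W_pred:  (|C|, d) classifier weights producing p
    W_noise: (d_n, d) projection producing the noise-specific embedding x_n
    W_trans: (|C|, |C|, d_n) weight vectors w_ij, shared across the dataset
    b_trans: scalar bias
    """
    p = softmax(W_pred @ x)                         # predicted relation distribution
    x_n = np.tanh(W_noise @ x)                      # embedding used only for noise modeling (tanh assumed)
    T = softmax(W_trans @ x_n + b_trans, axis=1)    # Eq. (1): each row sums to 1
    o = T.T @ p                                     # Eq. (2)
    o = o / o.sum()                                 # renormalize
    return p, T, o
```

During training the cross-entropy is taken against o, while at test time the relation is read off from p.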
3.2 Bag Level Modeling

Bag Embedding and Prediction One of the key challenges for the bag level model is how to aggregate the embeddings of individual sentences into a bag-level embedding. In this work, we experiment with two methods, namely average and attention aggregation (Lin et al., 2016). The former calculates the bag embedding, s, by averaging the embeddings of the sentences, and then feeds it to a softmax classifier for relation classification. The attention aggregation calculates an attention value, $a_{ij}$, for each sentence i in the bag with respect to each relation j, and aggregates to the bag level as $\mathbf{s}_j$ by the following equations:

$$\mathbf{s}_j = \sum_{i}^{n} a_{ij}\mathbf{x}_i; \quad a_{ij} = \frac{\exp(\mathbf{x}_i^{T}\mathbf{r}_j)}{\sum_{i'}^{n}\exp(\mathbf{x}_{i'}^{T}\mathbf{r}_j)}, \quad (3)$$

where $\mathbf{x}_i$ is the embedding of sentence i, n is the number of sentences in the bag, and $\mathbf{r}_j$ is the randomly initialized embedding of relation j. (While Lin et al. (2016) use a bilinear function to calculate $a_{ij}$, we simply use the dot product, since we find the two functions perform similarly in our experiments.) In a similar spirit to (Lin et al., 2016), the resulting bag embedding $\mathbf{s}_j$ is fed to a softmax classifier to predict the probability of relation j for the given bag.

Noise Modeling Since the transition matrix addresses the transition probability with respect to each true relation, the attention mechanism appears to be a natural fit for calculating the transition matrix at the bag level. Similar to the attention aggregation above, we calculate the bag embedding with respect to each relation using Equation 3, but with a separate set of relation embeddings $\mathbf{r}'_j$. We then calculate the transition matrix, T, by:

$$T_{ij} = \frac{\exp(\mathbf{s}_i^{T}\mathbf{r}'_j + b_i)}{\sum_{j=1}^{|C|}\exp(\mathbf{s}_i^{T}\mathbf{r}'_j + b_i)}, \quad (4)$$

where $\mathbf{s}_i$ is the bag embedding regarding relation i, and $\mathbf{r}'_j$ is the embedding of relation j.

4 Curriculum Learning based Training

One of the key challenges of this work is how to train and produce the transition matrix to model the noise in the training data without any direct guidance or human involvement. A straightforward solution is to directly align the observed distribution, o, with the noisy labels by minimizing the sum of two terms: CrossEntropy(o) + Regularization. However, doing so does not guarantee that the prediction distribution, p, will match the true relation distribution. The problem is that, at the beginning of training, we have no prior knowledge about the noise pattern; thus both T and p are unreliable, and the training procedure is likely to get trapped in a poor local optimum. Therefore, we require a technique to guide our model to gradually adapt to the noisy training data, e.g., learning something simple first, and then trying to deal with the noise.

Fortunately, this is exactly what curriculum learning can do. The idea of curriculum learning (Bengio et al., 2009) is simple: start with the easiest aspect of a task, and level up the difficulty gradually, which fits our problem well. We thus employ a curriculum learning framework to guide our model to gradually learn how to characterize the noise. Another advantage is that it helps avoid falling into a poor local optimum. With curriculum learning, our approach provides the flexibility to incorporate prior knowledge of noise, e.g., splitting a dataset into reliable and less reliable subsets, to improve the effectiveness of the transition matrix and better model the noise.

4.1 Trace Regularization

Before proceeding to the training details, we first discuss how we characterize the noise level of the data by controlling the trace of its transition matrix.
Intuitively, if the noise is small, the transition matrix T will tend to become an identity matrix, i.e., given a set of annotated training sentences, the observed relations and their true relations are almost identical. Since each row of T sums to 1, the similarity between the transition matrix and the identity matrix can be represented by its trace, trace(T). The larger trace(T) is, the larger the diagonal elements are, and the more similar the transition matrix T is to the identity matrix, indicating a lower level of noise. Therefore, we can characterize the noise pattern by controlling the expected value of trace(T) in the form of regularization. For example, we expect a larger trace(T) for reliable data, but a smaller trace(T) for less reliable data. Another advantage of employing trace regularization is that it can help reduce the model complexity and avoid overfitting.

4.2 Training

To tackle the challenge of having no direct guidance over the noise patterns, we implement a curriculum learning based training method that first trains the model without considering noise. In other words, we first focus on the loss from the prediction distribution p, and then take the noise modeling into account gradually along the training process, i.e., gradually increasing the importance of the loss from the observed distribution o while decreasing the importance of p. In this way, the prediction branch is roughly trained before the model attempts to characterize the noise, and thus avoids getting stuck in a poor local optimum. We thus minimize the following loss function:

$$L = \sum_{i=1}^{N} -\big((1-\alpha)\log(o_{iy_i}) + \alpha\log(p_{iy_i})\big) - \beta\,\mathrm{trace}(\mathbf{T}_i), \quad (5)$$

where $0 < \alpha \leq 1$ and $\beta > 0$ are two weighting parameters, $y_i$ is the relation assigned by DS to the i-th instance, N is the total number of training instances, $o_{iy_i}$ is the probability that the observed relation for the i-th instance is $y_i$, and $p_{iy_i}$ is the probability of predicting relation $y_i$ for the i-th instance.

Initially, we set α = 1 and train our model completely by minimizing the loss from the prediction distribution p. That is, we do not expect to model the noise, but focus on the prediction branch at this time. As the training progresses, the prediction branch gradually learns the basic prediction ability. We then decay α and β by a factor 0 < ρ < 1 (α* = ρα and β* = ρβ) every τ epochs, i.e., learning more about the noise from the observed distribution o and allowing a relatively smaller trace(T) to accommodate more noise. The motivation is to put more and more effort into learning the noise pattern as the training proceeds, in the spirit of curriculum learning. This gradual learning paradigm significantly distinguishes our method from prior work on noise modeling for DS. Moreover, since this method does not rely on any extra assumptions, it serves as our default training method for T.
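The default schedule around Eq. (5) can be sketched as follows. The model interface (forward/backward) is an assumed placeholder, while the initial values α = 1 and β = 0.1 and the decay ρ = 0.9 every τ = 5 epochs follow the settings reported later in Section 5.2.

```python
import numpy as np

def curriculum_loss(p, o, T, y, alpha, beta):
    """Eq. (5) for one instance: blend the two cross-entropy terms and penalize trace(T)."""
    return -((1 - alpha) * np.log(o[y]) + alpha * np.log(p[y])) - beta * np.trace(T)

def train(instances, model, epochs=30, alpha=1.0, beta=0.1, rho=0.9, tau=5):
    """Start from the prediction branch only (alpha = 1), then shift weight to the noise branch."""
    for epoch in range(epochs):
        for x, y in instances:
            p, T, o = model.forward(x)          # assumed interface: returns p, T, o for instance x
            loss = curriculum_loss(p, o, T, y, alpha, beta)
            model.backward(loss)                # assumed interface: one SGD update
        if (epoch + 1) % tau == 0:              # decay every tau epochs
            alpha *= rho
            beta *= rho
```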
With Prior Knowledge of Data Quality On the other hand, if we happen to have prior knowledge about which part of the training data is more reliable and which is less reliable, we can utilize this knowledge to design the curriculum. Specifically, we can build a curriculum by first training the prediction branch on the reliable data for several epochs, and then adding the less reliable data to train the full model. In this way, the prediction branch is roughly trained before being exposed to noisier data, and thus is less likely to fall into a poor local optimum. Furthermore, we can take better control of the training procedure with trace regularization, e.g., encouraging a larger trace(T) for the reliable subsets and a smaller trace(T) for the less reliable ones. Specifically, we propose to minimize:

$$L = \sum_{m=1}^{M}\sum_{i=1}^{N_m} -\log(o_{mi,y_{mi}}) - \beta_m\,\mathrm{trace}(\mathbf{T}_{mi}), \quad (6)$$

where $\beta_m$ is the regularization weight for the m-th data subset, M is the total number of subsets, $N_m$ is the number of instances in the m-th subset, and $\mathbf{T}_{mi}$, $y_{mi}$ and $o_{mi,y_{mi}}$ are the transition matrix, the relation labeled by DS and the observed probability of this relation for the i-th training instance in the m-th subset, respectively. Note that, different from Equation 5, this loss function does not need to initiate training by minimizing the loss regarding the prediction distribution p, since one can simply start by learning from the most reliable split first.

We also use trace regularization for the most reliable subset, since some noisy annotations inevitably appear even in this split. Specifically, we expect its trace(T) to be large (using a positive β) so that the elements of T are concentrated on the diagonal and T is more similar to the identity matrix. As for the less reliable subsets, we expect trace(T) to be small (using a negative β) so that the elements of the transition matrix are diffuse and T is less similar to the identity matrix. In other words, the transition matrix is encouraged to characterize the noise.

Note that this loss function only works for sentence level models. For bag level models, since reliable and less reliable sentences are all aggregated into a sentence bag, we cannot determine which bag is reliable and which is not. However, bag level models can still build a curriculum by changing the content of a bag, e.g., keeping reliable sentences in the bag first, then gradually adding less reliable ones, and training with Equation 5, which benefits from the prior knowledge of data quality as well.

5 Evaluation Methodology

Our experiments aim to answer two main questions: (1) is it possible to model the noise in the training data generated through DS, even when there is no prior knowledge to guide us? and (2) can prior knowledge of data quality help our approach better handle the noise? We apply our approach to both sentence level and bag level extraction models, and evaluate in situations where we do not have prior knowledge of the data quality as well as where such prior knowledge is available.

5.1 Datasets

We evaluate our approach on two datasets.

TIMERE We build TIMERE by using DS to align time-related Wikidata (Vrandečić and Krötzsch, 2014) KB triples with Wikipedia text. It contains 278,141 sentences with 12 types of relations between an entity mention and a time expression. We choose time-related relations because time expressions speak for themselves in terms of reliability. That is, given a KB triple <e, rel, t> and its aligned sentences, the finer-grained the time expression t appearing in the sentence, the more likely the sentence supports the existence of this triple. For example, a sentence containing both Alphabet and October-2-2015 is very likely to express the inception-time of Alphabet, while a sentence containing both Alphabet and 2015 could instead talk about many events, e.g., releasing the financial report of 2015, hiring a new CEO, etc.
Using this heuristic, we can split the dataset into 3 subsets according to the granularity of the time expressions involved, indicating different levels of reliability. Our criteria for determining reliability are as follows. Instances with full date expressions, i.e., Year-Month-Day, can be seen as the most reliable data, while those with partial date expressions, e.g., Month-Year and Year-Only, are considered less reliable. Negative data are constructed heuristically: any entity-time pair in a sentence without a corresponding triple in Wikidata is treated as negative. During training, we can access 184,579 negative and 77,777 positive sentences, including 22,214 reliable sentences and 2,094 and 53,469 less reliable ones. The validation set and test set are randomly sampled from the reliable (full-date) data for relatively fair evaluation, and contain 2,776 and 2,771 positive sentences and 5,143 and 5,095 negative sentences, respectively.

ENTITYRE is a widely-used entity relation extraction dataset, built by aligning triples in Freebase with the New York Times (NYT) corpus (Riedel et al., 2010). It contains 52 relations, 136,947 positive and 385,664 negative sentences for training, and 6,444 positive and 166,004 negative sentences for testing. Unlike TIMERE, this dataset does not contain any prior knowledge about the data quality. Since the sentence level annotations in ENTITYRE are too noisy to serve as a gold standard, we only evaluate bag-level models on ENTITYRE, a standard practice in previous works (Surdeanu et al., 2012; Zeng et al., 2015; Lin et al., 2016).

5.2 Experimental Setup

Hyper-parameters We use 200 convolution kernels with window size 3. During training, we use stochastic gradient descent (SGD) with batch size 20. The learning rates for sentence-level and bag-level models are 0.1 and 0.01, respectively.

Sentence level experiments are performed on TIMERE, using 100-d word embeddings pre-trained with GloVe (Pennington et al., 2014) on Wikipedia and Gigaword (Parker et al., 2011), and 20-d vectors for distance embeddings. Each of the three subsets of TIMERE is added after the previous phase has run for 15 epochs. The trace regularization weights are β1 = 0.01, β2 = −0.01 and β3 = −0.1, respectively, from the most reliable to the least reliable subset, with the ratio of β3 to β2 fixed to 10 or 5 when tuning.

Bag level experiments are performed on both TIMERE and ENTITYRE. For TIMERE, we use the same parameters as above. For ENTITYRE, we use 50-d word embeddings pre-trained on the NYT corpus using word2vec (Mikolov et al., 2013), and 5-d vectors for distance embeddings. For both datasets, α and β in Eq. 5 are initialized to 1 and 0.1, respectively. We tried various decay rates, {0.95, 0.9, 0.8}, and steps, {3, 5, 8}, and found that a decay rate of 0.9 with a step of 5 gives the best performance in most cases.

Evaluation Metric The performance is reported using the precision-recall (PR) curve, which is a standard evaluation metric in relation extraction. Specifically, the extraction results are first ranked in decreasing order of their confidence scores, and then the precision and recall are calculated by setting the threshold to the score of each extraction result one by one.
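A minimal sketch of this ranking-based precision/recall computation is shown below; the input format (a list of confidence-correctness pairs plus the number of gold facts in the held-out KB) is an illustrative assumption.

```python
def pr_curve(results, n_gold):
    """Compute one (precision, recall) point per threshold position.

    results: list of (confidence, is_correct) pairs for all extracted facts
    n_gold:  number of relational facts in the held-out KB test set
    """
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    curve, correct = [], 0
    for rank, (_, ok) in enumerate(ranked, start=1):
        correct += ok
        curve.append((correct / rank, correct / n_gold))   # (precision, recall)
    return curve
```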
Naming Conventions We evaluate our approach under a wide range of settings for sentence level (sent_) and bag level (bag_) models: (1) mix: trained on all three subsets of TIMERE mixed together; (2) reliable: trained using the reliable subset of TIMERE only; (3) PR: trained with prior knowledge of annotation quality, i.e., starting from the reliable data and then adding the unreliable data; (4) TM: trained with the dynamic transition matrix; (5) GTM: trained with a global transition matrix. At the bag level, we also investigate the performance of average aggregation (_avg) and attention aggregation (_att).

6 Experimental Results

6.1 Performance on TIMERE

Figure 2: Sentence Level Results on TIMERE.

Sentence Level Models The results of the sentence level models on TIMERE are shown in Figure 2. We can see that mixing all subsets together (sent_mix) gives the worst performance, significantly worse than using the reliable subset only (sent_reliable). This reflects the noisy nature of the training data obtained through DS and suggests that properly dealing with the noise is the key to applying DS to a wider range of applications. With the help of our dynamic transition matrix, the model (sent_mix_TM) significantly improves over sent_mix, delivering the same level of performance as sent_reliable in most cases. This suggests that our transition matrix can help mitigate the bad influence of noisy training instances.

Now let us consider the PR scenario, where one can build a curriculum by first training on the reliable subset and then gradually moving to both reliable and less reliable data. We can see that this simple curriculum learning based model (sent_PR) further outperforms sent_reliable significantly, indicating that the curriculum learning framework not only reduces the effect of noise, but also helps the model learn from noisy data. When applying the transition matrix approach to this curriculum learning framework, using one reliable subset and one unreliable subset generated by mixing our two less reliable subsets, our model (sent_PR_seg2_TM) further improves over sent_PR by utilizing the dynamic transition matrix to model the noise. It is not surprising that when we use all three subsets separately, our model (sent_PR_TM) significantly outperforms all other models by a large margin.

Figure 3: Bag Level Results on TIMERE. (a) Attention Aggregation. (b) Average Aggregation.

Bag Level Models In this setting, we first look at the performance of the bag level models with attention aggregation. The results are shown in Figure 3(a). Consider the comparison between the model trained on the reliable subset only (bag_att_reliable) and the one trained on the mixed dataset (bag_att_mix). In contrast to the sentence level, bag_att_mix outperforms bag_att_reliable by a large margin, because bag_att_mix takes the at-least-one assumption into consideration through the attention aggregation mechanism (Eq. 3), which can be seen as a denoising step within the bag.
This may also be the reason that, when we introduce either our dynamic transition matrix (bag_att_mix_TM) or the curriculum using prior knowledge of data quality (bag_att_PR) into the bag level models, the improvement over bag_att_mix is not as significant as at the sentence level. However, when we apply our dynamic transition matrix to the curriculum built upon prior knowledge of data quality (bag_att_PR_TM), the performance is further improved, especially in the high-precision region compared to bag_att_PR. We also note that the bag level's at-least-one assumption does not always hold, and there are still false negative and false positive problems. Therefore, our transition matrix approach, with or without prior knowledge of data quality, i.e., bag_att_mix_TM and bag_att_PR_TM, improves the performance in both cases, and bag_att_PR_TM performs slightly better.

The results of the bag level models with average aggregation are shown in Figure 3(b), where the relative ranking of the various settings is similar to that with attention aggregation. A notable difference is that both bag_avg_PR and bag_avg_mix_TM improve over bag_avg_mix by a larger margin compared to the attention aggregation setting. The reason may be that the average aggregation mechanism is not as good as the attention aggregation at denoising within the bag, which leaves more room for our transition matrix approach or curriculum learning with prior knowledge to improve. Also note that bag_avg_reliable performs best in the very-low-recall region but worst in general. This is because it ranks higher the sentences expressing either birth-date or death-date, the simplest but most common relations in the dataset, but fails to learn other relations with limited or noisy training instances, given its relatively simple aggregation strategy.

Figure 4: Global TM v.s. Dynamic TM.

Global v.s. Dynamic Transition Matrix We also compare our dynamic transition matrix method with the global transition matrix method, which maintains only one transition matrix for all training instances. Specifically, instead of dynamically generating a transition matrix for each datum, we first initialize an identity matrix $\mathbf{T}' \in \mathbb{R}^{|C| \times |C|}$, where |C| is the number of relations (including no-relation). Then the global transition matrix T is built by applying softmax to each row of T' so that $\sum_j T_{ij} = 1$:

$$T_{ij} = \frac{e^{T'_{ij}}}{\sum_{j=1}^{|C|} e^{T'_{ij}}}, \quad (7)$$

where $T_{ij}$ and $T'_{ij}$ are the elements in the i-th row and j-th column of T and T'. The element values of T' are also updated via backpropagation during training. As shown in Figure 4, using one global transition matrix (_GTM) is also beneficial and improves both the sentence level (sent_PR) and bag level (bag_att_PR) models. However, since the global transition matrix only captures the global noise pattern, it fails to characterize individual instances with subtle differences, resulting in a performance drop compared to the dynamic one (_TM).

Case Study We find that our transition matrix method tends to obtain more significant improvements on noisier relations. For example, time of spacecraft landing is noisier than time of spacecraft launch since, compared to the launch of a spacecraft, there are fewer sentences containing the landing time of a spacecraft that talk directly about the landing. Instead, many of these sentences tend to talk about the activities of the crew.
Our sent_PR_TM model improves the F1 of time of spacecraft landing and time of spacecraft launch over sent_PR by 9.09% and 2.78%, respectively. The transition matrix makes a more significant improvement on time of spacecraft landing since there are more noisy sentences for our method to handle, which results in a more significant improvement in the quality of the training data.

6.2 Performance on ENTITYRE

We evaluate our bag level models on ENTITYRE. As shown in Figure 5, it is not surprising that the basic model with attention aggregation (att) significantly outperforms the average one (avg), where att in our bag embedding is similar in spirit to (Lin et al., 2016), which has reported state-of-the-art performance on ENTITYRE. When injected with our transition matrix approach, both att_TM and avg_TM clearly outperform their basic versions.

Figure 5: Results on ENTITYRE.

Table 1: Comparison with feature-based methods. P@R 10/20/30 refers to the precision when recall equals 10%, 20% and 30%.
Method | P@R 10 | P@R 20 | P@R 30
Mintz  | 39.88  | 28.55  | 16.81
MultiR | 60.94  | 36.41  | –
MIML   | 60.75  | 33.82  | –
avg    | 58.04  | 51.25  | 42.45
avg_TM | 58.56  | 52.35  | 43.59
att    | 61.51  | 56.36  | 45.63
att_TM | 67.24  | 57.61  | 44.90

Similar to the situation on TIMERE, since att has taken the at-least-one assumption into account through its attention-based bag embedding mechanism, the improvement made by att_TM is not as large as that by avg_TM.

We also include a comparison with three feature-based methods: Mintz (Mintz et al., 2009) is a multiclass logistic regression model; MultiR (Hoffmann et al., 2011) is a probabilistic graphical model that can handle overlapping relations; MIML (Surdeanu et al., 2012) is also a probabilistic graphical model but operates in the multi-instance multi-label paradigm. As shown in Table 1, although the traditional feature-based methods have reasonable results in the low-recall region, their performance drops quickly as the recall goes up, and MultiR and MIML do not even reach 30% recall. This indicates that, while human-designed features can effectively capture certain relation patterns, their coverage is relatively low. On the other hand, neural network models have more stable performance across different recall levels, and att_TM performs generally better than the other models, indicating again the effectiveness of our transition matrix method.

7 Related Work

In addition to relation extraction, distant supervision (DS) has been shown to be effective in generating training data for various NLP tasks, e.g., tweet sentiment classification (Go et al., 2009), tweet named entity classification (Ritter et al., 2011), etc. However, these early applications of DS do not address the issue of data noise well.

In relation extraction (RE), recent works have been proposed to reduce the influence of wrongly labeled data. Takamatsu et al. (2012) remove potentially noisy sentences by identifying bad syntactic patterns at the preprocessing stage. Xu et al. (2013) use pseudo-relevance feedback to find possible false negative data. Riedel et al. (2010) make the at-least-one assumption and propose to alleviate the noise problem by considering RE as a multi-instance classification problem. Following this assumption, later work further improves the original paradigm using probabilistic graphical models (Hoffmann et al., 2011; Surdeanu et al., 2012) and neural network methods (Zeng et al., 2015).
Recently, Lin et al. (2016) propose to use an attention mechanism to reduce the noise within a sentence bag. Instead of characterizing the noise, these approaches only aim to alleviate its effect. The at-least-one assumption is often too strong in practice, and there is still a chance that a sentence bag is false positive or false negative. Thus it is important to model the noise pattern to guide the learning procedure. Ritter et al. (2013) and Min et al. (2013) employ a set of latent variables to represent the true relation. Our approach differs from theirs in two aspects. We target noise modeling in neural networks while they target probabilistic graphical models. We further advance their models by providing the capability to model the fine-grained transition from the true relation to the observed one, and the flexibility to incorporate indirect guidance.

Outside of NLP, various methods have been proposed in computer vision to model data noise using neural networks. Sukhbaatar et al. (2015) utilize a global transition matrix with weight decay to transform the true label distribution into the observed one. Reed et al. (2014) use a hidden layer to represent the true label distribution but try to force it to predict both the noisy label and the input. Chen and Gupta (2015) and Xiao et al. (2015) first estimate the transition matrix on a clean dataset and apply it to the noisy data. Our model shares a similar spirit with Misra et al. (2016) in that we both dynamically generate a transition matrix for each training instance, but, instead of using vanilla SGD, we train our model with a novel curriculum learning framework with trace regularization to control the behavior of the transition matrix. In NLP, the only work on neural-network-based noise modeling uses a single global transition matrix to model the noise introduced by cross-lingual projection of training data (Fang and Cohn, 2016). Our work advances it by generating a transition matrix dynamically for each instance, avoiding the use of a single component to characterize both reliable and unreliable data.

8 Conclusions

In this paper, we investigate the noise problem inherent in DS-style training data. We argue that the data speak for themselves by providing useful clues to reveal their noise patterns. We thus propose a novel transition matrix based method to dynamically characterize the noise underlying such training data in a unified framework along with the original prediction objective. One of our key innovations is to exploit a curriculum learning based training method to gradually learn to model the underlying noise pattern without direct guidance, and to provide the flexibility to exploit any prior knowledge of the data quality to further improve the effectiveness of the transition matrix. We evaluate our approach in two learning settings of distantly supervised relation extraction. The experimental results show that the proposed method can better characterize the underlying noise and consistently outperform state-of-the-art extraction models under various scenarios.

Acknowledgement

This work is supported by the National High Technology R&D Program of China (2015AA015403); the National Natural Science Foundation of China (61672057, 61672058); KLSTSPI Key Lab. of Intelligent Press Media Technology; the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012).
438 References Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML. ACM, pages 41–48. Xinlei Chen and Abhinav Gupta. 2015. Webly supervised learning of convolutional networks. In ICCV. pages 1431–1439. Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to lowresource pos tagging using cross-lingual projection. In CONLL. pages 178–186. Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford 1(12). Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of ACL. pages 541–550. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL. volume 1, pages 2124–2133. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. pages 3111–3119. Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In HLT-NAACL. pages 777–782. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL. pages 1003– 1011. Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy humancentric labels. In CVPR. pages 2930–2939. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition, linguistic data consortium. Technical report, Linguistic Data Consortium, Philadelphia. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532– 1543. Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2014. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596 . Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, pages 148–163. Alan Ritter, Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In EMNLP. Association for Computational Linguistics, pages 1524–1534. Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. TACL 1:367–378. Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2015. Training convolutional networks with noisy labels. In ICLR. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In EMNLP-CoNLL. pages 455–465. Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In ACL. pages 721–729. Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10):78–85. Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. 2015. Learning from massive noisy labeled data for image classification. In CVPR. 
pages 2691–2699. Wei Xu, Raphael Hoffmann, Le Zhao, and Ralph Grishman. 2013. Filling knowledge base gaps for distant supervision of relation extraction. In ACL. pages 665–670. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP. pages 1753–1762.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 440–450 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1041 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 440–450 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1041 A Syntactic Neural Model for General-Purpose Code Generation Pengcheng Yin Language Technologies Institute Carnegie Mellon University [email protected] Graham Neubig Language Technologies Institute Carnegie Mellon University [email protected] Abstract We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing datadriven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches. 1 Introduction Every programmer has experienced the situation where they know what they want to do, but do not have the ability to turn it into a concrete implementation. For example, a Python programmer may want to “sort my list in descending order,” but not be able to come up with the proper syntax sorted(my list, reverse=True) to realize his intention. To resolve this impasse, it is common for programmers to search the web in natural language (NL), find an answer, and modify it into the desired form (Brandt et al., 2009, 2010). However, this is time-consuming, and thus the software engineering literature is ripe with methods to directly generate code from NL descriptions, mostly with hand-engineered methods highly tailored to specific programming languages (Balzer, 1985; Little and Miller, 2009; Gvero and Kuncak, 2015). In parallel, the NLP community has developed methods for data-driven semantic parsing, which attempt to map NL to structured logical forms executable by computers. These logical forms can be general-purpose meaning representations (Clark and Curran, 2007; Banarescu et al., 2013), formalisms for querying knowledge bases (Tang and Mooney, 2001; Zettlemoyer and Collins, 2005; Berant et al., 2013) and instructions for robots or personal assistants (Artzi and Zettlemoyer, 2013; Quirk et al., 2015; Misra et al., 2015), among others. While these methods have the advantage of being learnable from data, compared to the programming languages (PLs) in use by programmers, the domain-specific languages targeted by these works have a schema and syntax that is relatively simple. Recently, Ling et al. (2016) have proposed a data-driven code generation method for high-level, general-purpose PLs like Python and Java. This work treats code generation as a sequence-tosequence modeling problem, and introduce methods to generate words from character-level models, and copy variable names from input descriptions. However, unlike most work in semantic parsing, it does not consider the fact that code has to be well-defined programs in the target syntax. 
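For a general-purpose PL such as Python, that syntactic structure is readily available: the standard library can parse any well-formed snippet into an abstract syntax tree and deterministically turn the tree back into surface code. A minimal illustration on the sorted example above (ast.dump with indent and ast.unparse require Python 3.9+; the approach described below uses the astor library for the tree-to-code step):

```python
import ast

# Parse surface code into Python's abstract syntax tree (AST).
tree = ast.parse("sorted(my_list, reverse=True)", mode="eval")
print(ast.dump(tree, indent=2))
# Roughly: Expression(body=Call(func=Name(id='sorted', ...),
#                               args=[Name(id='my_list', ...)],
#                               keywords=[keyword(arg='reverse', value=Constant(value=True))]))

# The reverse direction (AST -> code) is deterministic.
print(ast.unparse(tree))  # sorted(my_list, reverse=True)
```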
In this work, we propose a data-driven syntaxbased neural network model tailored for generation of general-purpose PLs like Python. In order to capture the strong underlying syntax of the PL, we define a model that transduces an NL statement into an Abstract Syntax Tree (AST; Fig. 1(a), § 2) for the target PL. ASTs can be deterministically generated for all well-formed programs using standard parsers provided by the PL, and thus give us a way to obtain syntax information with minimal engineering. Once we generate an AST, we can use deterministic generation tools to convert the AST into surface code. We hypothesize 440 Production Rule Role Explanation Call 7! expr[func] expr*[args] keyword*[keywords] Function Call . func: the function to be invoked . args: arguments list . keywords: keyword arguments list If 7! expr[test] stmt*[body] stmt*[orelse] If Statement . test: condition expression . body: statements inside the If clause . orelse: elif or else statements For 7! expr[target] expr*[iter] stmt*[body] For Loop . target: iteration variable . iter: enumerable to iterate over . body: loop body . orelse: else statements stmt*[orelse] FunctionDef 7! identifier[name] arguments*[args] Function Def. . name: function name . args: function arguments . body: function body stmt*[body] Table 1: Example production rules for common Python statements (Python Software Foundation, 2016) that such a structured approach has two benefits. First, we hypothesize that structure can be used to constrain our search space, ensuring generation of well-formed code. To this end, we propose a syntax-driven neural code generation model. The backbone of our approach is a grammar model (§ 3) which formalizes the generation story of a derivation AST into sequential application of actions that either apply production rules (§ 3.1), or emit terminal tokens (§ 3.2). The underlying syntax of the PL is therefore encoded in the grammar model a priori as the set of possible actions. Our approach frees the model from recovering the underlying grammar from limited training data, and instead enables the system to focus on learning the compositionality among existing grammar rules. Xiao et al. (2016) have noted that this imposition of structure on neural models is useful for semantic parsing, and we expect this to be even more important for general-purpose PLs where the syntax trees are larger and more complex. Second, we hypothesize that structural information helps to model information flow within the neural network, which naturally reflects the recursive structure of PLs. To test this, we extend a standard recurrent neural network (RNN) decoder to allow for additional neural connections which reflect the recursive structure of an AST (§ 4.2). As an example, when expanding the node ? in Fig. 1(a), we make use of the information from both its parent and left sibling (the dashed rectangle). This enables us to locally pass information of relevant code segments via neural network connections, resulting in more confident predictions. Experiments (§ 5) on two Python code generation tasks show 11.7% and 9.3% absolute improvements in accuracy against the state-of-the-art system (Ling et al., 2016). Our model also gives competitive performance on a standard semantic parsing benchmark1. 1Implementation available at https://github. com/neulab/NL2code 2 The Code Generation Problem Given an NL description x, our task is to generate the code snippet c in a modern PL based on the intent of x. 
We attack this problem by first generating the underlying AST. We define a probabilistic grammar model of generating an AST y given x: p(y|x). The best-possible AST ˆy is then given by ˆy = arg max y p(y|x). (1) ˆy is then deterministically converted to the corresponding surface code c.2 While this paper uses examples from Python code, our method is PLagnostic. Before detailing our approach, we first present a brief introduction of the Python AST and its underlying grammar. The Python abstract grammar contains a set of production rules, and an AST is generated by applying several production rules composed of a head node and multiple child nodes. For instance, the first rule in Tab. 1 is used to generate the function call sorted(·) in Fig. 1(a). It consists of a head node of type Call, and three child nodes of type expr, expr* and keyword*, respectively. Labels of each node are noted within brackets. In an AST, non-terminal nodes sketch the general structure of the target code, while terminal nodes can be categorized into two types: operation terminals and variable terminals. Operation terminals correspond to basic arithmetic operations like AddOp.Variable terminal nodes store values for variables and constants of built-in data types3. For instance, all terminal nodes in Fig. 1(a) are variable terminal nodes. 3 Grammar Model Before detailing our neural code generation method, we first introduce the grammar model at its core. Our probabilistic grammar model defines the generative story of a derivation AST. We fac2We use astor library to convert ASTs into Python code. 3bool, float, int, str. 441 Expr root expr[value] Call expr*[args] keyword*[keywords] Name str(sorted) expr[func] expr Name str(my_list) keyword str(reverse) expr[value] Name str(True) Action Flow Parent Feeding Apply Rule Generate Token GenToken with Copy (a) (b) Input: Code: . . . Figure 1: (a) the Abstract Syntax Tree (AST) for the given example code. Dashed nodes denote terminals. Nodes are labeled with time steps during which they are generated. (b) the action sequence (up to t14) used to generate the AST in (a) torize the generation process of an AST into sequential application of actions of two types: • APPLYRULE[r] applies a production rule r to the current derivation tree; • GENTOKEN[v] populates a variable terminal node by appending a terminal token v. Fig. 1(b) shows the generation process of the target AST in Fig. 1(a). Each node in Fig. 1(b) indicates an action. Action nodes are connected by solid arrows which depict the chronological order of the action flow. The generation proceeds in depth-first, left-to-right order (dotted arrows represent parent feeding, explained in § 4.2.1). Formally, under our grammar model, the probability of generating an AST y is factorized as: p(y|x) = T Y t=1 p(at|x, a<t), (2) where at is the action taken at time step t, and a<t is the sequence of actions before t. We will explain how to compute the action probabilities p(at|·) in Eq. (2) in § 4. Put simply, the generation process begins from a root node at t0, and proceeds by the model choosing APPLYRULE actions to generate the overall program structure from a closed set of grammar rules, then at leaves of the tree corresponding to variable terminals, the model switches to GENTOKEN actions to generate variables or constants from the open set. We describe this process in detail below. 
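As a data structure, a derivation such as Fig. 1(b) is simply a sequence of typed actions, and under Eq. (2) the probability of the full AST is the product of the per-action probabilities p(a_t | x, a_<t) along that sequence. The following is a schematic sketch of how such an oracle sequence might be represented; the rule strings are abbreviated, the exact decomposition follows the Python abstract grammar, and the names are illustrative rather than the released implementation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str     # "APPLYRULE" or "GENTOKEN"
    payload: str  # a production rule, or a terminal token

# Schematic prefix of the oracle action sequence for sorted(my_list, reverse=True).
oracle = [
    Action("APPLYRULE", "Expr -> expr[value]"),
    Action("APPLYRULE", "expr -> Call"),
    Action("APPLYRULE", "Call -> expr[func] expr*[args] keyword*[keywords]"),
    Action("APPLYRULE", "expr -> Name"),
    Action("APPLYRULE", "Name -> str"),
    Action("GENTOKEN", "sorted"),
    Action("GENTOKEN", "</n>"),   # special token closing the variable terminal node
    # ... continues depth-first, left-to-right over the remaining children ...
]
```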
3.1 APPLYRULE Actions APPLYRULE actions generate program structure, expanding the current node (the frontier node at time step t: nft) in a depth-first, left-to-right traversal of the tree. Given a fixed set of production rules, APPLYRULE chooses a rule r from the subset that has a head matching the type of nft, and uses r to expand nft by appending all child nodes specified by the selected production. As an example, in Fig. 1(b), the rule Call 7! expr. . . expands the frontier node Call at time step t4, and its three child nodes expr, expr* and keyword* are added to the derivation. APPLYRULE actions grow the derivation AST by appending nodes. When a variable terminal node (e.g., str) is added to the derivation and becomes the frontier node, the grammar model then switches to GENTOKEN actions to populate the variable terminal with tokens. Unary Closure Sometimes, generating an AST requires applying a chain of unary productions. For instance, it takes three time steps (t9 −t11) to generate the sub-structure expr* 7! expr 7! Name 7! str in Fig. 1(a). This can be effectively reduced to one step of APPLYRULE action by taking the closure of the chain of unary productions and merging them into a single rule: expr* 7!⇤ str. Unary closures reduce the number of actions needed, but would potentially increase the size of the grammar. In our experiments we tested our model both with and without unary closures (§ 5). 3.2 GENTOKEN Actions Once we reach a frontier node nft that corresponds to a variable type (e.g., str), GENTOKEN actions are used to fill this node with values. For generalpurpose PLs like Python, variables and constants have values with one or multiple tokens. For in442 stance, a node that stores the name of a function (e.g., sorted) has a single token, while a node that denotes a string constant (e.g., a=‘hello world’) could have multiple tokens. Our model copes with both scenarios by firing GENTOKEN actions at one or more time steps. At each time step, GENTOKEN appends one terminal token to the current frontier variable node. A special </n> token is used to “close” the node. The grammar model then proceeds to the new frontier node. Terminal tokens can be generated from a predefined vocabulary, or be directly copied from the input NL. This is motivated by the observation that the input description often contains out-ofvocabulary (OOV) variable names or literal values that are directly used in the target code. For instance, in our running example the variable name my list can be directly copied from the the input at t12. We give implementation details in § 4.2.2. 4 Estimating Action Probabilities We estimate action probabilities in Eq. (2) using attentional neural encoder-decoder models with an information flow structured by the syntax trees. 4.1 Encoder For an NL description x consisting of n words {wi}n i=1, the encoder computes a context sensitive embedding hi for each wi using a bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997), similar to the setting in (Bahdanau et al., 2014). See supplementary materials for detailed equations. 4.2 Decoder The decoder uses an RNN to model the sequential generation process of an AST defined as Eq. (2). Each action step in the grammar model naturally grounds to a time step in the decoder RNN. Therefore, the action sequence in Fig. 1(b) can be interpreted as unrolling RNN time steps, with solid arrows indicating RNN connections. 
The RNN maintains an internal state to track the generation process (§ 4.2.1), which will then be used to compute action probabilities p(at|x, a<t) (§ 4.2.2). 4.2.1 Tracking Generation States Our implementation of the decoder resembles a vanilla LSTM, with additional neural connections (parent feeding, Fig. 1(b)) to reflect the topological structure of an AST. The decoder’s internal hidden state at time step t, st, is given by: st = fLSTM([at−1 : ct : pt : nft], st−1), (3) sort my_list in descending ApplyRule[Call] Parent State + ApplyRule GenToken type of       ? order ... ... non­terminal variable terminal embedding of node type embedding of Figure 2: Illustration of a decoder time step (t = 9) where fLSTM(·) is the LSTM update function. [:] denotes vector concatenation. st will then be used to compute action probabilities p(at|x, a<t) in Eq. (2). Here, at−1 is the embedding of the previous action. ct is a context vector retrieved from input encodings {hi} via soft attention. pt is a vector that encodes the information of the parent action. nft denotes the node type embedding of the current frontier node nft 4. Intuitively, feeding the decoder the information of nft helps the model to keep track of the frontier node to expand. Action Embedding at We maintain two action embedding matrices, WR and WG. Each row in WR (WG) corresponds to an embedding vector for an action APPLYRULE[r] (GENTOKEN[v]). Context Vector ct The decoder RNN uses soft attention to retrieve a context vector ct from the input encodings {hi} pertain to the prediction of the current action. We follow Bahdanau et al. (2014) and use a Deep Neural Network (DNN) with a single hidden layer to compute attention weights. Parent Feeding pt Our decoder RNN uses additional neural connections to directly pass information from parent actions. For instance, when computing s9, the information from its parent action step t4 will be used. Formally, we define the parent action step pt as the time step at which the frontier node nft is generated. As an example, for t9, its parent action step p9 is t4, since nf9 is the node ?, which is generated at t4 by the APPLYRULE[Call7!. . .] action. We model parent information pt from two sources: (1) the hidden state of parent action spt, and (2) the embedding of parent action apt. pt is the concatenation. The parent feeding schema en4We maintain an embedding for each node type. 443 ables the model to utilize the information of parent code segments to make more confident predictions. Similar approaches of injecting parent information were also explored in the SEQ2TREE model in Dong and Lapata (2016)5. 4.2.2 Calculating Action Probabilities In this section we explain how action probabilities p(at|x, a<t) are computed based on st. APPLYRULE The probability of applying rule r as the current action at is given by a softmax6: p(at = APPLYRULE[r]|x, a<t) = softmax(WR · g(st))| · e(r) (4) where g(·) is a non-linearity tanh(W·st+b), and e(r) the one-hot vector for rule r. GENTOKEN As in § 3.2, a token v can be generated from a predefined vocabulary or copied from the input, defined as the marginal probability: p(at = GENTOKEN[v]|x, a<t) = p(gen|x, a<t)p(v|gen, x, a<t) + p(copy|x, a<t)p(v|copy, x, a<t). The selection probabilities p(gen|·) and p(copy|·) are given by softmax(WS · st). The probability of generating v from the vocabulary, p(v|gen, x, a<t), is defined similarly as Eq. (4), except that we use the GENTOKEN embedding matrix WG, and we concatenate the context vector ct with st as input. 
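A minimal numpy sketch of the APPLYRULE distribution in Eq. (4), given a decoder state s_t; the matrix shapes and variable names are assumptions for illustration, not the released code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def apply_rule_probs(s_t, W_R, W, b):
    """Eq. (4): distribution over production rules given decoder state s_t.

    s_t: decoder hidden state, shape (d,)
    W_R: APPLYRULE action embedding matrix, shape (num_rules, k)
    W, b: parameters of g(s) = tanh(W s + b), with W of shape (k, d)
    """
    g = np.tanh(W @ s_t + b)
    # In practice only rules whose head matches the type of the frontier node
    # n_ft are considered (Section 3.1); that masking is omitted here.
    return softmax(W_R @ g)
```

The GENTOKEN vocabulary distribution is computed analogously, with the GENTOKEN embedding matrix W_G and the concatenation of s_t with the context vector c_t in place of W_R and s_t.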
To model the copy probability, we follow recent advances in modeling copying mechanism in neural networks (Gu et al., 2016; Jia and Liang, 2016; Ling et al., 2016), and use a pointer network (Vinyals et al., 2015) to compute the probability of copying the i-th word from the input by attending to input representations {hi}: p(wi|copy, x, a<t) = exp(!(hi, st, ct)) Pn i0=1 exp(!(hi0, st, ct)), where !(·) is a DNN with a single hidden layer. Specifically, if wi is an OOV word (e.g., the variable name my list), which is represented by a special <unk> token during encoding, we then directly copy the actual word wi from the input description to the derivation. 4.3 Training and Inference Given a dataset of pairs of NL descriptions xi and code snippets ci, we parse ci into its AST yi and 5SEQ2TREE generates tree-structured outputs by conditioning on the hidden states of parent non-terminals, while our parent feeding uses the states of parent actions. 6We do not show bias terms for all softmax equations. Dataset HS DJANGO IFTTT Train 533 16,000 77,495 Development 66 1,000 5,171 Test 66 1,805 758 Avg. tokens in description 39.1 14.3 7.4 Avg. characters in code 360.3 41.1 62.2 Avg. size of AST (# nodes) 136.6 17.2 7.0 Statistics of Grammar w/o unary closure # productions 100 222 1009 # node types 61 96 828 terminal vocabulary size 1361 6733 0 Avg. # actions per example 173.4 20.3 5.0 w/ unary closure # productions 100 237 – # node types 57 92 – Avg. # actions per example 141.7 16.4 – Table 2: Statistics of datasets and associated grammars decompose yi into a sequence of oracle actions, which explains the generation story of yi under the grammar model. The model is then optimized by maximizing the log-likelihood of the oracle action sequence. At inference time, given an NL description, we use beam search to approximate the best AST ˆy in Eq. (1). See supplementary materials for the pseudo-code of the inference algorithm. 5 Experimental Evaluation 5.1 Datasets and Metrics HEARTHSTONE (HS) dataset (Ling et al., 2016) is a collection of Python classes that implement cards for the card game HearthStone. Each card comes with a set of fields (e.g., name, cost, and description), which we concatenate to create the input sequence. This dataset is relatively difficult: input descriptions are short, while the target code is in complex class structures, with each AST having 137 nodes on average. DJANGO dataset (Oda et al., 2015) is a collection of lines of code from the Django web framework, each with a manually annotated NL description. Compared with the HS dataset where card implementations are somewhat homogenous, examples in DJANGO are more diverse, spanning a wide variety of real-world use cases like string manipulation, IO operations, and exception handling. IFTTT dataset (Quirk et al., 2015) is a domainspecific benchmark that provides an interesting side comparison. Different from HS and DJANGO which are in a general-purpose PL, programs in IFTTT are written in a domain-specific language used by the IFTTT task automation 444 App. Users of the App write simple instructions (e.g., If Instagram.AnyNewPhotoByYou Then Dropbox.AddFileFromURL) with NL descriptions (e.g., “Autosave your Instagram photos to Dropbox”). Each statement inside the If or Then clause consists of a channel (e.g., Dropbox) and a function (e.g., AddFileFromURL)7. This simple structure results in much more concise ASTs (7 nodes on average). 
Because all examples are created by ordinary Apps users, the dataset is highly noisy, with input NL very loosely connected to target ASTs. The authors thus provide a high-quality filtered test set, where each example is verified by at least three annotators. We use this set for evaluation. Also note IFTTT’s grammar has more productions (Tab. 2), but this does not imply that its grammar is more complex. This is because for HS and DJANGO terminal tokens are generated by GENTOKEN actions, but for IFTTT, all the code is generated directly by APPLYRULE actions. Metrics As is standard in semantic parsing, we measure accuracy, the fraction of correctly generated examples. However, because generating an exact match for complex code structures is nontrivial, we follow Ling et al. (2016), and use tokenlevel BLEU-4 with as a secondary metric, defined as the averaged BLEU scores over all examples.8 5.2 Setup Preprocessing All input descriptions are tokenized using NLTK. We perform simple canonicalization for DJANGO, such as replacing quoted strings in the inputs with place holders. See supplementary materials for details. We extract unary closures whose frequency is larger than a threshold k (k = 30 for HS and 50 for DJANGO). Configuration The size of all embeddings is 128, except for node type embeddings, which is 64. The dimensions of RNN states and hidden layers are 256 and 50, respectively. Since our datasets are relatively small for a data-hungry neural model, we impose strong regularization using recurrent 7Like Beltagy and Quirk (2016), we strip function parameters since they are mostly specific to users. 8These two metrics are not ideal: accuracy only measures exact match and thus lacks the ability to give credit to semantically correct code that is different from the reference, while it is not clear whether BLEU provides an appropriate proxy for measuring semantics in the code generation task. A more intriguing metric would be directly measuring semantic/functional code equivalence, for which we present a pilot study at the end of this section (cf. Error Analysis). We leave exploring more sophisticated metrics (e.g. based on static code analysis) as future work. HS DJANGO ACC BLEU ACC BLEU Retrieval System† 0.0 62.5 14.7 18.6 Phrasal Statistical MT† 0.0 34.1 31.5 47.6 Hierarchical Statistical MT† 0.0 43.2 9.5 35.9 NMT 1.5 60.4 45.1 63.4 SEQ2TREE 1.5 53.4 28.9 44.6 SEQ2TREE–UNK 13.6 62.8 39.4 58.2 LPN† 4.5 65.6 62.3 77.6 Our system 16.2 75.8 71.6 84.5 Ablation Study – frontier embed. 16.7 75.8 70.7 83.8 – parent feed. 10.6 75.7 71.5 84.3 – copy terminals 3.0 65.7 32.3 61.7 + unary closure – 70.3 83.3 – unary closure 10.1 74.8 – Table 3: Results on two Python code generation tasks. †Results previously reported in Ling et al. (2016). dropouts (Gal and Ghahramani, 2016) for all recurrent networks, together with standard dropout layers added to the inputs and outputs of the decoder RNN. We validate the dropout probability from {0, 0.2, 0.3, 0.4}. For decoding, we use a beam size of 15. 5.3 Results Evaluation results for Python code generation tasks are listed in Tab. 3. Numbers for our systems are averaged over three runs. We compare primarily with two approaches: (1) Latent Predictor Network (LPN), a state-of-the-art sequenceto-sequence code generation model (Ling et al., 2016), and (2) SEQ2TREE, a neural semantic parsing model (Dong and Lapata, 2016). SEQ2TREE generates trees one node at a time, and the target grammar is not explicitly modeled a priori, but implicitly learned from data. 
We test both the original SEQ2TREE model released by the authors and our revised one (SEQ2TREE–UNK) that uses unknown word replacement to handle rare words (Luong et al., 2015). For completeness, we also compare with a strong neural machine translation (NMT) system (Neubig, 2015) using a standard encoder-decoder architecture with attention and unknown word replacement9, and include numbers from other baselines used in Ling et al. (2016). On the HS dataset, which has relatively large ASTs, we use unary closure for our model and SEQ2TREE, and for DJANGO we do not. 9For NMT, we also attempted to find the best-scoring syntactically correct predictions in the size-5 beam, but this did not yield a significant improvement over the NMT results in Tab. 3. 445 Figure 3: Performance w.r.t reference AST size on DJANGO Figure 4: Performance w.r.t reference AST size on HS System Comparison As in Tab. 3, our model registers 11.7% and 9.3% absolute improvements over LPN in accuracy on HS and DJANGO. This boost in performance strongly indicates the importance of modeling grammar in code generation. For the baselines, we find LPN outperforms NMT and SEQ2TREE in most cases. We also note that SEQ2TREE achieves a decent accuracy of 13.6% on HS, which is due to the effect of unknown word replacement, since we only achieved 1.5% without it. A closer comparison with SEQ2TREE is insightful for understanding the advantage of our syntax-driven approach, since both SEQ2TREE and our system output ASTs: (1) SEQ2TREE predicts one node each time step, and requires additional “dummy” nodes to mark the boundary of a subtree. The sheer number of nodes in target ASTs makes the prediction process error-prone. In contrast, the APPLYRULE actions of our grammar model allows for generating multiple nodes at a single time step. Empirically, we found that in HS, SEQ2TREE takes more than 300 time steps on average to generate a target AST, while our model takes only 170 steps. (2) SEQ2TREE does not directly use productions in the grammar, which possibly leads to grammatically incorrect ASTs and thus empty code outputs. We observe that the ratio of grammatically incorrect ASTs predicted by SEQ2TREE on HS and DJANGO are 21.2% and 10.9%, respectively, while our system guarantees grammaticality. Ablation Study We also ablated our bestperforming models to analyze the contribution of each component. “–frontier embed.” removes the frontier node embedding nft from the decoder RNN inputs (Eq. (3)). This yields worse results on DJANGO while gives slight improvements in acCHANNEL FULL TREE Classical Methods posclass (Quirk et al., 2015) 81.4 71.0 LR (Beltagy and Quirk, 2016) 88.8 82.5 Neural Network Methods NMT 87.7 77.7 NN (Beltagy and Quirk, 2016) 88.0 74.3 SEQ2TREE (Dong and Lapata, 2016) 89.7 78.4 Doubly-Recurrent NN 90.1 78.2 (Alvarez-Melis and Jaakkola, 2017) Our system 90.0 82.0 – parent feed. 89.9 81.1 – frontier embed. 90.1 78.7 Table 4: Results on the noise-filtered IFTTT test set of “>3 agree with gold annotations” (averaged over three runs), our model performs competitively among neural models. curacy on HS. This is probably because that the grammar of HS has fewer node types, and thus the RNN is able to keep track of nft without depending on its embedding. Next, “–parent feed.” removes the parent feeding mechanism. The accuracy drops significantly on HS, with a marginal deterioration on DJANGO. 
This result is interesting because it suggests that parent feeding is more important when the ASTs are larger, which will be the case when handling more complicated code generation tasks like HS. Finally, removing the pointer network (“–copy terminals”) in GENTOKEN actions gives poor results, indicating that it is important to directly copy variable names and values from the input. The results with and without unary closure demonstrate that, interestingly, it is effective on HS but not on DJANGO. We conjecture that this is because on HS it significantly reduces the number of actions from 173 to 142 (c.f., Tab. 2), with the number of productions in the grammar remaining unchanged. In contrast, DJANGO has a broader domain, and thus unary closure results in more productions in the grammar (237 for DJANGO vs. 100 for HS), increasing sparsity. Performance by the size of AST We further investigate our model’s performance w.r.t. the size of the gold-standard ASTs in Figs. 3 and 4. Not surprisingly, the performance drops when the size of the reference ASTs increases. Additionally, on the HS dataset, the BLEU score still remains at around 50 even when the size of ASTs grows to 200, indicating that our proposed syntax-driven approach is robust for long code segments. Domain Specific Code Generation Although this is not the focus of our work, evaluation on IFTTT brings us closer to a standard semantic parsing set446 input <name> Brawl </name> <cost> 5 </cost> <desc> Destroy all minions except one (chosen randomly) </desc> <rarity> Epic </rarity> ... pred. class Brawl(SpellCard): def init (self): super(). init (’Brawl’, 5, CHARACTER CLASS. WARRIOR, CARD RARITY.EPIC) def use(self, player, game): super().use(player, game) targets = copy.copy(game.other player.minions) targets.extend(player.minions) for minion in targets: minion.die(self) A ref. minions = copy.copy(player.minions) minions.extend(game.other player.minions) if len(minions) > 1: survivor = game.random choice(minions) for minion in minions: if minion is not survivor: minion.die(self) B input join app config.path and string ’locale’ into a file path, substitute it for localedir. pred. localedir = os.path.join( app config.path, ’locale’) 3 input self.plural is an lambda function with an argument n, which returns result of boolean expression n not equal to integer 1 pred. self.plural = lambda n: len(n) 7 ref. self.plural = lambda n: int(n!=1) Table 5: Predicted examples from HS (1st) and DJANGO. Copied contents (copy probability > 0.9) are highlighted. ting, which helps to investigate similarities and differences between generation of more complicated general-purpose code and and more limiteddomain simpler code. Tab. 4 shows the results, following the evaluation protocol in (Beltagy and Quirk, 2016) for accuracies at both channel and full parse tree (channel + function) levels. Our full model performs on par with existing neural network-based methods, while outperforming other neural models in full tree accuracy (82.0%). This score is close to the best classical method (LR), which is based on a logistic regression model with rich hand-engineered features (e.g., brown clusters and paraphrase). Also note that the performance between NMT and other neural models is much closer compared with the results in Tab. 3. This suggests that general-purpose code generation is more challenging than the simpler IFTTT setting, and therefore modeling structural information is more helpful. Case Studies We present output examples in Tab. 5. 
On HS, we observe that most of the time our model gives correct predictions by filling learned code templates from training data with arguments (e.g., cost) copied from input. This is in line with the findings in Ling et al. (2016). However, we do find interesting examples indicating that the model learns to generalize beyond trivial copying. For instance, the first example is one that our model predicted wrong — it generated code block A instead of the gold B (it also missed a function definition not shown here). However, we find that the block A actually conveys part of the input intent by destroying all, not some, of the minions. Since we are unable to find code block A in the training data, it is clear that the model has learned to generalize to some extent from multiple training card examples with similar semantics or structure. The next two examples are from DJANGO. The first one shows that the model learns the usage of common API calls (e.g., os.path.join), and how to populate the arguments by copying from inputs. The second example illustrates the difficulty of generating code with complex nested structures like lambda functions, a scenario worth further investigation in future studies. More examples are attached in supplementary materials. Error Analysis To understand the sources of errors and how good our evaluation metric (exact match) is, we randomly sampled and labeled 100 and 50 failed examples (with accuracy=0) from DJANGO and HS, respectively. We found that around 2% of these examples in the two datasets are actually semantically equivalent. These examples include: (1) using different parameter names when defining a function; (2) omitting (or adding) default values of parameters in function calls. While the rarity of such examples suggests that our exact match metric is reasonable, more advanced evaluation metrics based on statistical code analysis are definitely intriguing future work. For DJANGO, we found that 30% of failed cases were due to errors where the pointer network failed to appropriately copy a variable name into the correct position. 25% were because the generated code only partially implemented the required functionality. 10% and 5% of errors were due to malformed English inputs and pre-processing errors, respectively. The remaining 30% of examples were errors stemming from multiple sources, or errors that could not be easily categorized into the above. For HS, we found that all failed card examples were due to partial implementation errors, such as the one shown in Table 5. 6 Related Work Code Generation and Analysis Most works on code generation focus on generating code for domain specific languages (DSLs) (Kushman and 447 Barzilay, 2013; Raza et al., 2015; Manshadi et al., 2013), with neural network-based approaches recently explored (Liu et al., 2016; Parisotto et al., 2016; Balog et al., 2016). For general-purpose code generation, besides the general framework of Ling et al. (2016), existing methods often use language and task-specific rules and strategies (Lei et al., 2013; Raghothaman et al., 2016). A similar line is to use NL queries for code retrieval (Wei et al., 2015; Allamanis et al., 2015). The reverse task of generating NL summaries from source code has also been explored (Oda et al., 2015; Iyer et al., 2016). Finally, our work falls into the broad field of probabilistic modeling of source code (Maddison and Tarlow, 2014; Nguyen et al., 2013). Our approach of factoring an AST using probabilistic models is closely related to Allamanis et al. 
(2015), which uses a factorized model to measure the semantic relatedness between NL and ASTs for code retrieval, while our model tackles the more challenging generation task. Semantic Parsing Our work is related to the general topic of semantic parsing, which aims to transform NL descriptions into executable logical forms. The target logical forms can be viewed as DSLs. The parsing process is often guided by grammatical formalisms like combinatory categorical grammars (Kwiatkowski et al., 2013; Artzi et al., 2015), dependency-based syntax (Liang et al., 2011; Pasupat and Liang, 2015) or taskspecific formalisms (Clarke et al., 2010; Yih et al., 2015; Krishnamurthy et al., 2016; Mei et al., 2016). Recently, there are efforts in designing neural network-based semantic parsers (Misra and Artzi, 2016; Dong and Lapata, 2016; Neelakantan et al., 2016; Yin et al., 2016). Several approaches have be proposed to utilize grammar knowledge in a neural parser, such as augmenting the training data by generating examples guided by the grammar (Kocisk´y et al., 2016; Jia and Liang, 2016). Liang et al. (2016) used a neural decoder which constrains the space of next valid tokens in the query language for question answering. Finally, the structured prediction approach proposed by Xiao et al. (2016) is closely related to our model in using the underlying grammar as prior knowledge to constrain the generation process of derivation trees, while our method is based on a unified grammar model which jointly captures production rule application and terminal symbol generation, and scales to general purpose code generation tasks. 7 Conclusion This paper proposes a syntax-driven neural code generation approach that generates an abstract syntax tree by sequentially applying actions from a grammar model. Experiments on both code generation and semantic parsing tasks demonstrate the effectiveness of our proposed approach. Acknowledgment We are grateful to Wang Ling for his generous help with LPN and setting up the benchmark. We thank I. Beltagy for providing the IFTTT dataset. We also thank Li Dong for helping with SEQ2TREE and insightful discussions. References Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of ICML. volume 37. David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly recurrent neural networks. In Proceedings of ICLR. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of EMNLP. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transaction of ACL 1(1). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. 2016. Deepcoder: Learning to write programs. CoRR abs/1611.01989. Robert Balzer. 1985. A 15 year perspective on automatic programming. IEEE Trans. Software Eng. 11(11). Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of LAW-ID@ACL. I. Beltagy and Chris Quirk. 2016. Improved semantic parsers for if-then statements. In Proceedings of ACL. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 
2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP. 448 Joel Brandt, Mira Dontcheva, Marcos Weskamp, and Scott R. Klemmer. 2010. Example-centric programming: integrating web search into the development environment. In Proceedings of CHI. Joel Brandt, Philip J. Guo, Joel Lewenstein, Mira Dontcheva, and Scott R. Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of CHI. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4). James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of CoNLL. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of NIPS. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL. Tihomir Gvero and Viktor Kuncak. 2015. Interactive synthesis using free-form queries. In Proceedings of ICSE. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8). Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of ACL. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL. Tom´as Kocisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of EMNLP. Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. 2016. Semantic parsing to probabilistic programs for situated question answering. In Proceedings of EMNLP. Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of NAACL. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the EMNLP. Tao Lei, Fan Long, Regina Barzilay, and Martin C. Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of ACL. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. CoRR abs/1611.00020. Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL. Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of ACL. Greg Little and Robert C. Miller. 2009. Keyword programming in java. Autom. Softw. Eng. 16(1). Chang Liu, Xinyun Chen, Eui Chul Richard Shin, Mingcheng Chen, and Dawn Xiaodong Song. 2016. Latent attention for if-then program synthesis. In Proceedings of NIPS. Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Chris J. Maddison and Daniel Tarlow. 2014. Structured generative models of natural source code. In Proceedings of ICML. 
Mehdi Hafezi Manshadi, Daniel Gildea, and James F. Allen. 2013. Integrating programming by example and natural language programming. In Proceedings of AAAI. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of AAAI. Dipendra K. Misra and Yoav Artzi. 2016. Neural shiftreduce CCG semantic parsing. In Proceedings of EMNLP. Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of ACL. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of ICLR. Graham Neubig. 2015. lamtram: A toolkit for language and translation modeling using neural networks. http://www.github.com/neubig/lamtram. Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen. 2013. A statistical semantic language model for source code. In Proceedings of ACM SIGSOFT. 449 Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (T). In Proceedings of ASE. Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet Kohli. 2016. Neuro-symbolic program synthesis. CoRR abs/1611.01855. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL. Python Software Foundation. 2016. Python abstract grammar. https://docs.python.org/2/library/ast.html. Chris Quirk, Raymond J. Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of ACL. Mukund Raghothaman, Yi Wei, and Youssef Hamadi. 2016. SWIM: synthesizing what i mean: code search and idiomatic snippet synthesis. In Proceedings of ICSE. Mohammad Raza, Sumit Gulwani, and Natasa MilicFrayling. 2015. Compositional program synthesis from natural language and examples. In Proceedings of IJCAI. Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of ECML. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS. Yi Wei, Nirupama Chandrasekaran, Sumit Gulwani, and Youssef Hamadi. 2015. Building bing developer assistant. Technical report. https://www.microsoft.com/enus/research/publication/building-bing-developerassistant/. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of ACL. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL. Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural enquirer: Learning to query tables in natural language. In Proceedings of IJCAI. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form structured classification with probabilistic categorial grammars. In Proceedings of UAI. 450
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 451–462 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1042 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 451–462 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1042 Learning bilingual word embeddings with (almost) no bilingual data Mikel Artetxe Gorka Labaka Eneko Agirre IXA NLP group University of the Basque Country (UPV/EHU) {mikel.artetxe,gorka.labaka,e.agirre}@ehu.eus Abstract Most methods to learn bilingual word embeddings rely on large parallel corpora, which is difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need of bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources. 1 Introduction Multilingual word embeddings have attracted a lot of attention in recent times. In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016), they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging (Zhang et al., 2016), parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012). Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015), but there have been several proposals to relax this requirement, given its scarcity in most language pairs. A possible relaxation is to use document-aligned or label-aligned comparable corpora (Søgaard et al., 2015; Vuli´c and Moens, 2016; Mogadala and Rettinger, 2016), but large amounts of such corpora are not always available for some language pairs. An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016). However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages. In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries. Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved. The method can also work with trivially generated seed dictionaries of numerals (i.e. 1-1, 2-2, 3-3, 4-4...) making it possible to learn bilingual word embeddings without any real bilingual data. 
In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources. The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1). In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings. We analyze previous work in Section 2. Section 3 describes the self-learning framework, while Section 4 presents the experiments. Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis. 451 Learn D using nearest neighbor D = 1-a, 2-b, 3-c, 4-x, 5-y XW and Z in same space W Learn W using D and rotate X D = 1-a, 2-b, 3-c. XW Z X Figure 1: A general schema of the proposed self-learning framework. Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary. In our proposal we use the new dictionary to learn a new mapping, iterating until convergence. 2 Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings. 2.1 Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary. The first of such methods is due to Mikolov et al. (2013a), who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries. The same optimization objective is used by Zhang et al. (2016), who constrain the transformation matrix to be orthogonal. Xing et al. (2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping. Finally, Lazaridou et al. (2015) use max-margin optimization with intruder negative sampling. Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space. Lu et al. (2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations. Artetxe et al. (2016) propose a general framework that clarifies the relation between Mikolov et al. (2013a), Xing et al. (2015), Faruqui and Dyer (2014) and Zhang et al. (2016) as variants of the same core optimization objective, and show that a new variant is able to surpass them all. While most of the previous methods use gradient descent, Artetxe et al. (2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al. (2017) to incorporate dimensionality reduction. A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015), is bilingual lexicon extraction, which is also the main evaluation method. 
More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to cosine similarity, although Dinu et al. (2015) and Smith et al. (2017) propose alternative retrieval methods to address the hubness problem. 2.2 Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs. The only exception in this regard is the work by Zhang et al. (2016), who only use 10 word pairs with good results on transfer learning for part-of-speech tagging. Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction. Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and Pad´o, 2010; Vuli´c and Moens, 2013). However, while previous techniques incrementally build a high452 Algorithm 1 Traditional framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: W ←LEARN MAPPING(X, Z, D) 2: D ←LEARN DICTIONARY(X, Z, W) 3: EVALUATE DICTIONARY(D) dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays. A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary. This is analyzed in depth by Vuli´c and Korhonen (2016), who propose using documentaligned corpora to extract the training dictionary. A more common approach is to rely on shared words and cognates (Peirsman and Pad´o, 2010; Smith et al., 2017), eliminating the need of bilingual data in practice. Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs. Miceli Barone (2016) and Cao et al. (2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence. The former uses adversarial autoencoders (Makhzani et al., 2016), combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other. Although promising, the reported performance in both cases is poor in comparison to other methods. Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015). However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings. 
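The numeral seed dictionaries mentioned above require no bilingual resources at all; one possible way to generate them from two monolingual vocabularies (a sketch with an illustrative function name, not the released code):

```python
def numeral_seed_dictionary(src_vocab, trg_vocab):
    """Pair every numeral that appears in both vocabularies, e.g. ('1', '1'), ('1992', '1992')."""
    shared = {w for w in src_vocab if w.isdigit()} & {w for w in trg_vocab if w.isdigit()}
    return sorted((w, w) for w in shared)
```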
Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ←LEARN MAPPING(X, Z, D) 3: D ←LEARN DICTIONARY(X, Z, W) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation. This way, one can say that the seed (train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger). Algorithm 1 summarizes this framework. Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time. The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met. Algorithm 2 summarizes this alternative framework that we propose. Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1). However, efficiency turns out to be critical for a variety of reasons. First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations. Even more importantly, our framework requires to explicitly build the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words ondemand later at runtime. Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done. In the following two subsections, we respectively describe the embedding mapping method and the dictionary in453 duction method that we adopt in our work with these efficiency requirements in mind. 3.1 Embedding mapping As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent. Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al. (2016) for its simplicity and good results as reported in their paper. We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm. Let X and Z denote the word embedding matrices in two languages so that Xi∗corresponds to the ith source language word embedding and Zj∗ corresponds to the jth target language embedding. While Artetxe et al. (2016) assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that Dij = 1 if the ith source language word is aligned with the jth target language word. The goal is then to find the optimal mapping matrix W ∗so that the sum of squared Euclidean distances between the mapped source embeddings Xi∗W and target embeddings Zj∗for the dictionary entries Dij is minimized: W ∗= arg min W X i X j Dij||Xi∗W −Zj∗||2 Following Artetxe et al. 
(2016), we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e. WW T = W T W = I), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding to better bilingual mappings. Under such orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: W ∗= arg max W Tr XWZT DT  where Tr (·) denotes the trace operator (the sum of all the elements in the main diagonal). The optimal orthogonal solution for this problem is given by W ∗= UV T , where XT DZ = UΣV T is the singular value decomposition of XT DZ. Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries. 3.2 Dictionary induction As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings. In nearest neighbor retrieval, each source language word is assigned the closest word in the target language. In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1). This way, following the notation in Section 3.1, we set Dij = 1 if j = argmaxk (Xi∗W) · Zk∗and Dij = 0 otherwise1. While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix XWZT can be easily vectorized using popular linear algebra libraries, obtaining big performance gains. However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies. For that reason, instead of computing the entire similarity matrix XWZT in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results. 4 Experiments and results In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity. Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks. The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/ vecmap. 4.1 Experimental settings For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by Dinu et al. (2015), which includes monolingual word embeddings in both languages together with a bilingual dictionary split in a training set and a 1Note that we induce the dictionary entries starting from the source language words. We experimented with other alternatives in development, with minor differences. 454 test set2. The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b)3, using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC). 
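Before continuing with the data, the two components just described (the SVD-based mapping of Section 3.1 and the blockwise nearest-neighbor induction of Section 3.2) can be combined into the self-learning loop of Algorithm 2. The NumPy sketch below is illustrative only and is not the released vecmap implementation; the seed dictionary is represented as aligned index lists, and the batch size is an arbitrary choice.

```python
import numpy as np

def normalize(M):
    # Length normalization followed by mean centering (Section 3.1 preprocessing).
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M - M.mean(axis=0, keepdims=True)

def learn_mapping(X, Z, src_idx, trg_idx):
    # Orthogonal W maximizing Tr(XWZ^T D^T): W = UV^T, where U S V^T is the SVD
    # of X^T D Z.  With the dictionary given as aligned index lists, the sparse
    # product X^T D Z reduces to X[src_idx].T @ Z[trg_idx].
    M = X[src_idx].T @ Z[trg_idx]
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def learn_dictionary(X, Z, W, batch=1000):
    # Nearest-neighbor induction for every source word, computed in vectorized
    # blocks so the full similarity matrix never has to fit in memory (Section 3.2).
    XW = X @ W
    best = np.empty(XW.shape[0], dtype=int)
    sims = np.empty(XW.shape[0])
    for start in range(0, XW.shape[0], batch):
        block = XW[start:start + batch] @ Z.T
        best[start:start + batch] = block.argmax(axis=1)
        sims[start:start + batch] = block.max(axis=1)
    return best, sims.mean()  # induced dictionary and its average dot product

def self_learning(X, Z, seed_src, seed_trg, threshold=1e-6):
    # Algorithm 2: alternate mapping and induction until the average dot product
    # of the induced dictionary stops improving (threshold value from the paper).
    X, Z = normalize(X), normalize(Z)
    src_idx, trg_idx = np.asarray(seed_src), np.asarray(seed_trg)
    prev_obj = -np.inf
    while True:
        W = learn_mapping(X, Z, src_idx, trg_idx)
        best, obj = learn_dictionary(X, Z, W)
        if obj - prev_obj < threshold:
            return W, best
        prev_obj = obj
        # From the second iteration on, the full induced dictionary replaces the seed.
        src_idx, trg_idx = np.arange(X.shape[0]), best
```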
The training and test sets were derived from a dictionary built form Europarl word alignments and available at OPUS (Tiedemann, 2012), taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set. In addition to English-Italian, we selected two other languages from different language families with publicly available resources. We thus created analogous datasets for English-German and English-Finnish. In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009) that was also used for English and Italian. Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 20164 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014). In addition to that, we created training and test sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as Dinu et al. (2015). Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries. This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries. In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning. For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g. 1-1, 2-2, 3-3, 1992-1992 2http://clic.cimec.unitn.it/ ˜georgiana.dinu/down/ 3The context window was set to 5 words, the dimension of the embeddings to 300, the sub-sampling to 1e-05 and the number of negative samples to 10, and the vocabulary was restricted to the 200,000 most frequent words 4http://www.statmt.org/wmt16/ translation-task.html etc.). The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish. While more sophisticated approaches are possible (e.g. involving the edit distance of all words), we believe that this method is general enough that should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g. Chinese and Russian). While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case. For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and allowing to compare it to systems that use richer bilingual data. There are no many crosslingual word similarity datasets, and we used the RG-65 and WordSim353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for EnglishItalian as published by Camacho-Collados et al. (2015) 5. 
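The numeral seed dictionaries and the nested training subsets described above are straightforward to reproduce. The following sketch makes the construction explicit; the vocabulary and dictionary formats are assumptions, not the exact scripts used to build the released datasets.

```python
import re
import random

def numeral_dictionary(src_vocab, trg_vocab):
    """Seed pairs from tokens that are pure digit strings in both vocabularies
    (e.g. '1992'-'1992'), i.e. the [0-9]+ criterion described above."""
    pattern = re.compile(r'^[0-9]+$')
    src_nums = {w for w in src_vocab if pattern.match(w)}
    trg_nums = {w for w in trg_vocab if pattern.match(w)}
    return sorted((w, w) for w in src_nums & trg_nums)

def nested_subsets(train_dict, sizes=(2500, 1000, 500, 250, 100, 75, 50, 25), seed=0):
    """Shuffle once and take prefixes, so each subset is a strict subset
    of the bigger dictionaries, as described above."""
    shuffled = list(train_dict)
    random.Random(seed).shuffle(shuffled)
    return {k: shuffled[:k] for k in sizes}
```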
As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next. After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time. The curves in the next section confirm that this was a reasonable choice. This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration. 4.2 Bilingual lexicon induction For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al. (2013a), Xing et al. (2015), Zhang et al. (2016) and Artetxe et al. (2016), all of them implemented as part of the framework proposed by the latter. The results ob5http://lcl.uniroma1.it/ similarity-datasets/ 455 English-Italian English-German English-Finnish 5,000 25 num. 5,000 25 num. 5,000 25 num. Mikolov et al. (2013a) 34.93 0.00 0.00 35.00 0.00 0.07 25.91 0.00 0.00 Xing et al. (2015) 36.87 0.00 0.13 41.27 0.07 0.53 28.23 0.07 0.56 Zhang et al. (2016) 36.73 0.07 0.27 40.80 0.13 0.87 28.16 0.14 0.42 Artetxe et al. (2016) 39.27 0.07 0.40 41.87 0.13 0.73 30.62 0.21 0.77 Our method 39.67 37.27 39.40 40.87 39.60 40.27 28.72 28.16 26.47 Table 1: Accuracy (%) on bilingual lexicon induction for different seed dictionaries tained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1. The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems. As another reference, the best published results using nearest-neighbor retrieval are due to Lazaridou et al. (2015), who report an accuracy of 40.20% for the full EnglishItalian dictionary, almost at pair with our system (39.67%). In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out. The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases. The method by Zhang et al. (2016) also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries. In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages. Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend. Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved. In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair. 
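The accuracies reported in Table 1 follow the standard bilingual lexicon induction protocol: a test source word counts as correct if the nearest target-language neighbor of its mapped embedding is among its gold translations. A hedged sketch of this evaluation, assuming a gold dictionary that may map one source word to several acceptable targets:

```python
import numpy as np

def lexicon_accuracy(X, Z, W, gold):
    """gold: dict mapping a source word index to a set of acceptable target indices.
    A test entry is correct if the nearest target neighbor of the mapped source
    embedding is one of its gold translations."""
    src = np.array(sorted(gold))
    sims = (X[src] @ W) @ Z.T                 # (num test words, target vocabulary size)
    predictions = sims.argmax(axis=1)
    correct = sum(pred in gold[s] for s, pred in zip(src, predictions))
    return correct / len(src)
```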
In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual evidence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker. 4.3 Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al. (2015), which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016)6. As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise. We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences. As shown in the results in Table 2, our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset. But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources. The relatively poor results of Luong et al. (2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora. However, it is not clear how to introduce monolingual corpora on those methods. We did run some experiments with BilBOWA (Gouws et al., 2015), which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results7. All in all, our experiments show 6We also tested English-German pre-trained embeddings from Klementiev et al. (2012) and Chandar A P et al. (2014). They both had coverage problems that made the results hard to compare, and, when considering the correlations for the word pairs in their vocabulary, their performance was poor. 7Upadhyay et al. (2016) report similar problems using 456 G G G G G G G G G 0 10 20 30 40 0 1000 2000 3000 4000 5000 Seed dictionary size Accuracy (%) Method G Our method Artetxe et al. (2016) Xing et al. (2015) Zhang et al. (2016) Mikolov et al. (2013a) Figure 2: Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone. 5 Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple selflearning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling in degenerated solutions. In this section, we try to shed light on our approach, and give empirical evidence supporting our claim. More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed selflearning framework is implicitly solving the following global optimization problem8: W ∗= arg max W X i max j (Xi∗W) · Zj∗ s.t. 
WW T = W T W = I Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word. Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that BilBOWA. 8While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well. IT DE Bi. data WS RG WS Luong et al. (2015) Europarl .331 .335 .356 Mikolov et al. (2013a) 5k dict .627 .643 .528 Xing et al. (2015) 5k dict .614 .700 .595 Zhang et al. (2016) 5k dict .616 .704 .596 Artetxe et al. (2016) 5k dict .617 .716 .597 Our method 5k dict .624 .742 .616 25 dict .626 .749 .612 num. .628 .739 .604 Table 2: Spearman correlations on English-Italian and English-German crosslingual word similarity they have any target language word nearby, making the optimization value small. In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large. While it is certainly possible to build degenerated solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields to meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings. The reasoning for how the self-learning framework is optimizing this objective is as follows. At the end of each iteration, the dictionary D is updated to assign, for the current mapping W, each source language word to its closest target language word. This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same). The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1). In addition to that, it is also possible that, for some source words, some other target words get closer after the update. Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it. It is interesting to note that the above reasoning is valid no matter what the the initial solution is, and, in fact, the global optimization objective does not depend on the seed dictionary nor any other 457 0.25 0.30 0.35 0.40 0.45 10 20 30 40 Iteration Objective function Seed dict. 5,000 2,500 1,000 500 250 100 75 50 25 num. none 0 10 20 30 40 10 20 30 40 Iteration Accuracy (%) Seed dict. 5,000 2,500 1,000 500 250 100 75 50 25 num. none Figure 3: Learning curve on English-Italian according to the global objective function (left) and the accuracy on bilingual lexicon induction (right) bilingual resource. For that reason, it should be possible to use a random initialization instead of a small seed dictionary. 
However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough. The general behavior of our method is reflected in Figure 3, which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction. As it can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected. At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as it can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations. Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason of the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate. Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was. We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%. In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary). At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary). This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima. For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research. 6 Error analysis So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al. (2016) with the 5,000 entry dictionary. For that purpose, we took 100 random examples from the test set in the [1-5K] frequency bin, another 100 from the [5K20K] frequency bin and 30 from the [100K-200K] frequency bin, and manually analyzed each of the errors made by all the 4 different variants. Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in real458 ity. This corresponds to both different morphological variants of the gold standard translations (e.g. dichiarato/dichiar`o) and other valid translations that were missing in the gold standard (e.g. climb →salita instead of the gold standard scalato). 
This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard. As for the actual errors, we observe that nearly a third of them correspond to named entities for all the different variants. Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g. Ryan →Jason, John → Paolo), which are often highly related to the original ones (e.g. Volvo →BMW, Olympus →Nikon). While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g. John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal. For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including nearsynonyms (e.g. guidelines →raccomandazioni), antonyms (e.g. sender →destinatario) and words in the same semantic field (e.g. nominalism →intuizionismo / innatismo, which are all philosophical doctrines). However, there are also a few instances where the relationship is weak or unclear (e.g. loch →giardini, sweep →serrare). We also observe a few errors that are related to multiwords or collocations (e.g. carrier →aereo, presumably related to the multiword air carrier / linea aerea), as well as some rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem (Dinu et al., 2015; Lazaridou et al., 2015). All in all, our error analysis reveals that the baseline method of Artetxe et al. (2016) and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary. Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might suggest, encouraging the incorporation of these techniques in other applications. 7 Conclusions and future work In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique. Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora. In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow to learn bilingual word embeddings in a completely unsupervised manner. In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all. In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods (Dinu et al., 2015; Smith et al., 2017). 
Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015). Acknowledgements We thank the anonymous reviewers for their insightful comments and Flavio Merenda for his help with the error analysis. This research was partially supported by a Google Faculty Award, the Spanish MINECO (TUNER TIN2015-65308-C5-1-R, MUSTER PCIN-2015-226 and TADEEP TIN2015-70214-P, cofunded by EU FEDER), the Basque Government (MODELA KK-2016/00082) and the UPV/EHU (excellence research group). Mikel Artetxe enjoys a doctoral grant from the Spanish MECD. 459 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2289–2294. https://aclweb.org/anthology/D16-1250. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation 43(3):209–226. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 1–7. http://www.aclweb.org/anthology/P15-2001. Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. 2016. A distribution-based model to learn bilingual word embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1818–1827. http://aclweb.org/anthology/C16-1171. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 1853–1861. http://papers.nips.cc/paper/5270-anautoencoder-approach-to-learning-bilingual-wordrepresentations.pdf. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), workshop track. Qing Dou, Ashish Vaswani, Kevin Knight, and Chris Dyer. 2015. Unifying bayesian inference and vector space models for improved decipherment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 836–845. http://www.aclweb.org/anthology/P15-1081. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 462– 471. http://www.aclweb.org/anthology/E14-1049. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments. 
In Proceedings of the 32nd International Conference on Machine Learning. pages 748–756. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee, Mumbai, India, pages 1459–1474. http://www.aclweb.org/anthology/C12-1089. Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 270–280. http://www.aclweb.org/anthology/P15-1027. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 250–256. http://www.aclweb.org/anthology/N15-1028. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing. Association for Computational Linguistics, Denver, Colorado, pages 151– 159. http://www.aclweb.org/anthology/W15-1521. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2016. Adversarial autoencoders. In Proceedings of the 4rd International Conference on Learning Representations (ICLR 2016), workshop track. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. http://www.aclweb.org/anthology/P14-5010. 460 Antonio Valerio Miceli Barone. 2016. Towards crosslingual distributed representations without parallel text trained with adversarial autoencoders. In Proceedings of the 1st Workshop on Representation Learning for NLP. Association for Computational Linguistics, Berlin, Germany, pages 121–126. http://anthology.aclweb.org/W16-1614. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168 . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 3111–3119. http://papers.nips.cc/paper/5021distributed-representations-of-words-and-phrasesand-their-compositionality.pdf. Aditya Mogadala and Achim Rettinger. 2016. Bilingual word embeddings from parallel and nonparallel corpora for cross-language text classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 692–702. http://www.aclweb.org/anthology/N16-1083. Yves Peirsman and Sebastian Pad´o. 2010. Crosslingual induction of selectional preferences with bilingual vector spaces. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 921– 929. http://www.aclweb.org/anthology/N10-1135. Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), conference track. Anders Søgaard, ˇZeljko Agi´c, H´ector Mart´ınez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for crosslingual NLP. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 1713– 1722. http://www.aclweb.org/anthology/P15-1165. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12). European Language Resources Association (ELRA), Istanbul, Turkey. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 589–598. http://www.aclweb.org/anthology/N16-1072. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1661–1670. http://www.aclweb.org/anthology/P16-1157. Ivan Vuli´c and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 247–257. http://www.aclweb.org/anthology/P16-1024. Ivan Vuli´c and Marie-Francine Moens. 2013. A study on bootstrapping bilingual vector spaces from nonparallel data (and nothing else). In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1613–1624. http://www.aclweb.org/anthology/D131168. Ivan Vuli´c and Marie-Francine Moens. 2016. Bilingual distributed word representations from documentaligned comparable data. Journal of Artificial Intelligence Research 55(1):953–994. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Ann Arbor, Michigan, pages 119–129. http://www.aclweb.org/anthology/W14-1613. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1006–1011. 
http://www.aclweb.org/anthology/N15-1104. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag – multilingual pos tagging via coarse mapping between embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language 461 Technologies. Association for Computational Linguistics, San Diego, California, pages 1307–1317. http://www.aclweb.org/anthology/N16-1156. Kai Zhao, Hany Hassan, and Michael Auli. 2015. Learning translation models from monolingual continuous representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1527– 1536. http://www.aclweb.org/anthology/N15-1176. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1393–1398. http://www.aclweb.org/anthology/D13-1141. 462
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 463–472 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1043 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 463–472 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1043 Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks William R. Foland Jr. Department of Computer Science University of Colorado Boulder, CO 80309 [email protected] James H. Martin Department of Computer Science and Institute of Cognitive Science University of Colorado Boulder, CO 80309 [email protected] Abstract We present a system which parses sentences into Abstract Meaning Representations, improving state-of-the-art results for this task by more than 5%. AMR graphs represent semantic content using linguistic properties such as semantic roles, coreference, negation, and more. The AMR parser does not rely on a syntactic preparse, or heavily engineered features, and uses five recurrent neural networks as the key architectural components for inferring AMR graphs. 1 Introduction Semantic analysis is the process of extracting meaning from text, revealing key ideas such as ”who did what to whom, when, how, and where?”, and is considered to be one of the most complex tasks in natural language processing. Historically, an important consideration has been the definition of the output of the task - how can the concepts in a sentence be captured in a general, consistent and expressive manner that facilitates downstream semantic processing? Over the years many formalisms have been proposed as suitable target representations including variants of first order logic, semantic networks, and frame-based slot-filler notations. Such representations have found a place in many semantic applications but there is no clear consensus as to the best representation. However, with the rise of supervised machine learning techniques, a new requirement has come to the fore: the ability of human annotators to quickly and reliably generate semantic representations as training data. Abstract Meaning Representation (AMR) (Banarescu et al., 2012)1 was developed to provide 1http://amr.isi.edu/language.html a computationally useful and expressive representation that could be reliably generated by human annotators. Sentence meanings in AMR are represented in the form of graphs consisting of concepts (nodes) connected by labeled relations (edges). AMR graphs include a number of traditional NLP representations including named entities (Nadeau and Sekine, 2007), word senses (Banerjee and Pedersen, 2002), coreference relations, and predicate-argument structures (Kingsbury and Palmer, 2002; Palmer et al., 2005). More recent innovations include wikification of named entities and normalization of temporal expressions (Verhagen et al., 2010; Str¨otgen and Gertz, 2010). (2016) provides an insightful discussion of the relationship between AMR and other formal representations including first order logic. The process of creating AMR’s for sentences is called AMR Parsing and was first introduced in (Flanigan et al., 2014). A key factor driving the development of AMR systems has been the increasing availability of training resources in the form of corpora where each sentence is paired with a corresponding AMR representation 2. 
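To make the representation concrete, an AMR can be stored as a set of concept-labeled variables together with labeled directed edges and attributes. The sketch below is an illustrative container only — it is neither the annotation format nor the parser's internal data structure — and uses the standard "The boy wants to go." textbook example rather than a sentence from this paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AMRGraph:
    """Minimal AMR container: variables mapped to concepts, labeled directed
    relations between variables, attribute triples, and a designated top node."""
    concepts: Dict[str, str] = field(default_factory=dict)               # variable -> concept
    relations: List[Tuple[str, str, str]] = field(default_factory=list)  # (source, label, target)
    attributes: List[Tuple[str, str, str]] = field(default_factory=list) # (variable, label, value)
    top: str = ""

# "The boy wants to go." — note the reentrant 'b' node, which is both the
# ARG0 of want-01 and the ARG0 of go-01.
g = AMRGraph(
    concepts={"w": "want-01", "b": "boy", "g": "go-01"},
    relations=[("w", "ARG0", "b"), ("w", "ARG1", "g"), ("g", "ARG0", "b")],
    top="w",
)
```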
A consistent framework for evaluating AMR parsers was defined by the Semeval-2016 Meaning Representation Parsing Task3. Standard training, development and test splits for the AMR Annotation Release 1 corpus are provided, as well as an additional out-of-domain test dataset, for system comparisons. 4 Viewed as a structured prediction task, AMR parsing poses some difficult challenges not faced by other related language processing tasks including part of speech tagging, syntactic parsing or se2See amr.isi.edu for information on currently available resources 3http://alt.qcri.org/semeval2016/task8/# 4Available from LDC as LDC2015E86 DEFT Phase 2 AMR Annotation R1 dataset. 463 degree ARG0 ARG1 quant mod ARG1 plan-01 TOP country wiki: "France" name op1: france further country numerous nucleus ARG0 cooperate-01 name (a) An AMR graphical depiction of the meaning of the sentence France plans further nuclear cooperation with numerous countries . Concepts are represented as ovals, and relations are the directed connections between them. Predicate concepts are labelled with their PropBank sense, and semantic roles are indicated by ”Arg” relations. Non-Arg relations like name or mod are called ”Nargs” in this paper. Note the shaded section, which shows an example of a subgraph, containing related concepts and relations. In the example, the subgraph represents ”France” which includes the category country and a shortened link to the France wiki page. Feature Extraction Sentence Subgraph Relation Resolution AMR Subgraph Expansion and AMR Construction Args Nargs Attr NCat UofI Wikifier SG Hard Max Hard Max Subgraph Spans NER WikiCat[8] Word Features Pnargs Pargs relations category Pattr (b) General Architecture for the AMR Parser, which creates an AMR based on the words in a sentence. The 5 B-LSTM networks infer structures of the AMR. For example, the SG network infers subgraphs, which are mostly single concept, like ”plan-01” or ”further”, but can also be like the more complex shaded ”France” subgraph in the example. Other B-LSTM networks are used to infer predicate argument relations (Args), other relations (Nargs), attributes like ”TOP” (Attr) and name categories like ”country” for France (Ncat). Figure 1: An example Abstract Meaning Representation and the architecture of the AMR parser, which produces an AMR from a sentence. mantic role labeling. The prediction task in these settings can be cast as per-token labeling tasks (i.e. IOB tags) or as a sequence of discrete parser actions, as in transition-based (shift-reduce) approaches to dependency parsing. The first challenge is that AMR representations are by design abstracted away from their associated surface forms. AMR corpora pair sentences with their corresponding representations, without providing an explicit annotation, or alignment, that links the parts of the representation to their corresponding elements of the sentence. Not surprisingly, this complicates training, decoding and evaluation. The second challenge is the fact that, as noted earlier, the AMR parsing task is an amalgam of predicate identification and classification, entity recognition, co-reference, word sense disambiguation and semantic role labeling — each of which relies on the others for successful analysis. The architecture and system presented in the following sections is largely motivated by these two challenges. 2 Related Work 2.1 AMR Parsers Most current AMR parsers are constructed using some form of supervised machine learning that exploits existing AMR corpora. 
In general, these systems make use of features derived from various forms of syntactic analysis, ranging from part-of-speech tagging to more complex dependency or phrase-structure analysis. Currently, most systems fall into two classes: (1) systems that incrementally transform a dependency parse into an AMR graph using transition-based systems (Wang et al., 2015, 2016), and (2) graph-oriented approaches that use syntactic features to score edges between all concept pairs, and then use a maximum spanning connected subgraph (MSCG) algorithm to select edges that will constitute the graph (Flanigan et al., 2014; Werling et al., 2015).

As expected, there are exceptions to these general approaches. The largely rule-based approach of (2015) converts logical forms from an existing semantic analyzer into AMR graphs. They demonstrate the ability to use their existing system to generate AMRs in German, French, Spanish and Japanese without the need for a native AMR corpus. (2015) proposes a synchronous hyperedge replacement grammar solution, (2015) uses syntax-based machine translation techniques to create tree structures similar to AMR, while (2015) creates logical form representations of sentences and then converts these to AMR. An exception to the use of heavily engineered features is the deep learning approach of (2016), which, following (Collobert et al., 2011), relies on word embeddings and recurrent neural networks to generate AMR graphs.

2.2 Bidirectional LSTM Neural Networks

Unlike relatively simple sequence processing tasks like part-of-speech tagging and NER, semantic analysis requires the ability to keep track of relevant information that may be arbitrarily far away from the words currently under consideration. Recurrent neural networks (RNNs) are a class of neural architecture that use a form of short-term memory in order to solve this semantic distance problem. Basic RNN systems have been enhanced with the use of special memory cell units, referred to as Long Short-Term Memory neural networks, or LSTMs (Hochreiter and Schmidhuber, 1997). Such systems can effectively process information dispersed over hundreds of words (Schmidhuber et al., 2002; Gers et al., 2001). Bidirectional LSTM (B-LSTM) networks are LSTMs that are connected so that both future and past sequence context can be examined. (2015) successfully used a bidirectional LSTM network for semantic role labelling. We use the LSTM cell as described in (Graves et al., 2013), configured in a B-LSTM shown in Figure 2, as the core network architecture in the system. Five B-LSTM neural networks comprise the parser.

Figure 2: A general diagram of a B-LSTM network, showing the feature input vectors xi, the forward layer (f) and the reverse layer (r). The network generates vectors of log likelihoods which are converted to probability vectors and then joined together to form an array of probabilities.
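The B-LSTM building block diagrammed in Figure 2 can be sketched as a generic per-token classifier. The PyTorch definition below is illustrative only — it is not the parser's original implementation, and the multiple feature lookup tables of Section 4.2 are reduced here to a single token embedding.

```python
import torch
import torch.nn as nn

class BLSTMTagger(nn.Module):
    """Per-token tagger: embed tokens, run a bidirectional LSTM over the
    sentence, and score each position against the tag inventory."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)     # (batch, T, emb_dim)
        h, _ = self.lstm(x)           # (batch, T, 2 * hidden_dim): forward + reverse states
        return self.out(h)            # per-token scores, softmaxed into probabilities downstream
```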
Given an input sentence, the approach taken in our AMR parser is similar to (Flanigan et al., 2014) in that it consists of two subtasks: (1) discover the concepts (nodes and sub-graphs) present in the sentence, and (2) determine the relations (arcs) that connect the concepts (relations capture both traditional predicate-argument structures (ARGs), as well as additional modifier relations that capture notions including quantification, polarity, and cardinality.) Neither of these tasks is straightforward in the AMR context. Among the complications are the fact that individual words may contribute to more than one node (as in the case of France), parts of the graph may be “reentrant”, participating in relations with multiple concepts, and predicate-argument and modifier relations can be introduced by arbitrary parts of the input. At a high level, our system takes an input sentence in form of a vector of word embeddings 5source at https://github.com/BillFoland/daisyluAMR 465 and uses a series of recurrent neural networks to (1) discover the basic set of nodes and subgraphs that comprise the AMR, (2) discover the set of predicate-argument relations among those concepts, and (3) identifying any relevant modifier relations that are present. A high level block diagram of the parser is shown in Figure 1b. The parser extracts features from the sentence which are processed by a bidirectional LSTM network (B-LSTM) to create a set of AMR subgraphs, which contain one or two concepts as well as their internal relations to each other. Features based on the sentence and these subgraphs are then processed by a pair of B-LSTM networks to compute the probabilities of relations between all subgraphs. All subgraphs are then connected using an iterative, greedy algorithm to compute a single component graph, with all subgraphs connected by relations. Separately, another two B-LSTM networks compute attribute and name categories, which are then appended to the graph. Finally, the subgraphs are expanded into the most probable AMR concept and relation primitives to create the final AMR. 4 Detailed Parser Architecture 4.1 AMR Spans, Subgraphs, and Subgraph Decoding Mapping the words in a sentence to AMR concepts is a critical first step in the parsing process, and can influence the performance of all subsequent processing. Although the most common mapping is one word to one concept, a series of consecutive words, or span, can also be associated with an AMR concept. Likewise, a span of words can be mapped to a small connected subgraph, such as the single word span France which is mapped to a subgraph composed of two concepts connected by a name relation. (see the shaded section of Figure 1a). Training corpora provide sentences which are annotated by humans with AMR graphs, not necessarily including a reference span to subgraph mapping. An automatic AMR aligner can be used to predict relationships between words and gold AMR’s. We use the alignments produced by the aligner of (2014), along with the words and reference AMR graphs, to identify a subgraph type to associate with each span. Each word in the sentence is then associated with an IOBES subgraph type tag. We call the algorithm which defines span to subgraph mapping the Expert Span Identifier, and use it to train the SG Network. A convenient development detail stems from the fact that during the AMR creation process, the identified subgraphs must be expanded into individual concepts and relations. 
For example, the subgraph type ”Named”, along with the span France, must be expanded to create the concepts, relations, and attributes shown in Figure 1a. A Subgraph Expander algorithm implements this task, which is essentially the inverse of the Expert Span Identifier. The Expert Span Identifier and Subgraph Expander were developed by cascading the two in a test configuration as shown in Figure 3a. 4.2 Features All input features for the five networks correspond to the sequence of words in the input sentence, and are presented to the networks as indices into lookup tables. With the exception of pre-trained word embeddings, these lookup tables are randomly initialized prior to training and representations are created during the training process. 4.2.1 Word Embeddings The use of distributed word representations generated from large text corpora is pervasive in modern NLP. We start with 300 dimension GloVe representations (Pennington et al., 2014) trained on the 840 billion word common crawl (Smith et al., 2013). We added two binary dimensions: one for out of vocabulary words, and one for padding, resulting in vectors with a width of 302. These embeddings are mapped from the words in the sentence, and are then trained using back propagation just like other parameters in the network. 4.2.2 Wikifier The AMR standard was expanded to include the annotation of named entities with a canonical form, using Wikipedia as the standard (see France in Figure 1a). The wiki link associated with this ”wikification” is expressed using the :wiki attribute, which requires some kind of global external knowledge of the Wikipedia ontology. We use the University of Illinois Wikifier (Ratinov et al., 2011; Cheng and Roth, 2013) to identify the :link directly, and use the possible categories output from the wikifier as feature inputs to the NCat Network. 466 Expert Span Identifier Compare Subgraph Expander Sentence Alignment AMR Subgraph Accuracy Subgraph Spans (a) Expert System and Subgraph Expander Development. The alignment between the words in the sentence and elements of the AMR is provided by an automatic aligner. The expert system uses the sentence, reference AMR, and alignment to identify spans of words which are related to concepts within the AMR. These spans are also labelled with a subgraph type. A ”subgraph expander” uses the words and subgraph type to expand into AMR subgraphs. Expert Span Identifier AMR Alignment Sentence Expert Subgraph Spans Feature Extraction UofI Wikifier SG NER Word Features Predicted Subgraph Spans Backpropagation cross entropy (b) SG Network Training. The SG Network uses just the words in the sentence as input, and is trained to imitate the output of the Expert System. This output defines spans of words and their subgraph types, which are the nodes of the AMR graph. Later stages of the system use this information to infer other aspects of the AMR, like relations (edges). Figure 3: SG Model Development Details. Named Entity Recognition can be valuable input to a parser, and state-of-the-art NER systems can be created using convolutional neural networks (Collobert et al., 2011) or LSTM (Chiu and Nichols, 2015) aided by information from gazetteers. These gazetteers are large dictionaries containing well known named entities (e.g., (Florian et al., 2003)). Rather than add gazetteer features to our system, we make use of the NER information already calculated and provided by the Univ. of Illinois Wikifier. 
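Stepping back to the word-embedding features of Section 4.2.1, the augmentation of 300-dimensional GloVe vectors with out-of-vocabulary and padding indicator dimensions can be sketched as follows; the GloVe dictionary format and the reserved padding index are assumptions for the illustration.

```python
import numpy as np

def build_embedding_table(glove, vocab, dim=300):
    """Extend pre-trained vectors with two binary indicator dimensions
    (out-of-vocabulary and padding), giving 302-wide rows as in Section 4.2.1.
    `glove` is assumed to map words to 300-d vectors; index 0 is reserved for padding."""
    table = np.zeros((len(vocab) + 1, dim + 2), dtype=np.float32)
    table[0, dim + 1] = 1.0                       # padding indicator
    for i, word in enumerate(vocab, start=1):
        if word in glove:
            table[i, :dim] = glove[word]
        else:
            table[i, dim] = 1.0                   # out-of-vocabulary indicator
    return table                                   # further trained by backpropagation
```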
We then encode the classified named entities output from the wikifier as feature embeddings, which are used by the SG Network. 4.2.3 AMR Subgraph (SG) Network The features used as input to the SG network are: • word: 45Kx302, the word embeddings • suffix: 430x5, embeddings based on the final two letters of each word. • caps: 5x5, embeddings based on the capitalization pattern of the word. • NER: 5x5, embeddings indexed by NER from the Wikifier, ’O’, ’LOC’, ’ORG’, ’PER’ or ’MISC’. The SG Network produces probabilities for 46 BIOES tagged subgraph types, and the highest probability tag is chosen for each word, as shown for the example sentence in Table 1. 4.2.4 Predicate Argument Relations (Args) Network The AMR concepts (nodes) are connected by relations (arcs). We found it convenient to distinguish predicate argument relations, or ”Args” from other relations, which we call ”Nargs”. For example, see ARG0 and ARG1 relations in Figure 1a are ”Args”, compared with the name, degree, mod, or quant relations which are ”Nargs”. The Args Network is run once for each predicate subgraph, and produces a matrix Pargs which defines the probability (prior to the identification of any relations6) of a type of predicate argument relation from a predicate subgraph to any other SG identified subgraph. (For example, see ARG0 and ARG1 relations in Figure 1a.) The matrix has dimensions 5 by s, where 5 is the number of predicate arg relations identified by the network, and s is the total number of subgraphs identified by the SG Network for the sentence. The Args features, calculated for each source predicate subgraph, are: • Word, Suffix and Caps as in the SG network. • SG: 46x5, indexed by the SG network identified subgraph. • PredWords[5], 45Kx302: The word embeddings of the word and surrounding 2 words associated with the source predicate subgraph. 6relation probabilities change as hard decisions are made, see section 4.3 467 words BIOES Prob kind France S Named 0.995 Named subgraph plans S Pred-01 0.997 plan-01 further S NonPred 0.931 further nuclear S NonPred 0.990 nucleus cooperation S Pred-01 0.986 cooperate-01 with O 1.000 O numerous S NonPred 0.982 numerous countries S NonPred 0.860 country . O 0.999 O Table 1: SG Network Example Output feature width Word[france] 302 Suffix[ce] 5 Caps[firstUp] 5 SG[S Named] 10 Word[further] 302 Word[nuclear] 302 Word[cooperation] 302 Word[with] 302 Word[numerous] 302 SG[S NonPred] 10 SG[S NonPred] 10 SG[S Pred-01] 10 SG[O] 10 SG[S NonPred] 10 Distance[4] 5 Table 2: Args Network Features for the word France while evaluating outgoing args for the word cooperation, associated with predicate cooperate-01 • PredSG[5], 46x10: The SG embedding of the word and surrounding 2 words associated with the source predicate subgraph. • regionMark: 21x5, indexed by the distance in words between the word and the word associated with the source predicate subgraph. Table 2 shows an example feature set for one subgraph while evaluating a predicate subgraph. 4.2.5 Non-Predicate Relations (Nargs) Network The Nargs Network uses features similar to the Args network. It is run once for each subgraph, and produces a matrix Pnargs which defines the probability of a type of relation from a subgraph to any other subgraph, prior to the identification of any relations.7 The matrix has dimensions 43 by s, where 43 is the number of non-arg relations identified by the network, and s is the total number of subgraphs identified by the SG Network for the sentence. 
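Downstream components consume the SG output as spans rather than per-word tags, so the BIOES-tagged subgraph types illustrated in Table 1 must be decoded into (start, end, subgraph type) triples. A hedged sketch of this decoding step; the exact tag strings are illustrative rather than the system's internal encoding.

```python
def decode_bioes(tags):
    """Turn per-word BIOES-prefixed subgraph tags (e.g. 'S_Named', 'B_Pred-01',
    'I_Pred-01', 'E_Pred-01', 'O') into (start, end, subgraph_type) spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == 'O':
            start = None
            continue
        prefix, sg_type = tag.split('_', 1)
        if prefix == 'S':                       # single-word span
            spans.append((i, i, sg_type))
        elif prefix == 'B':                     # span begins
            start = i
        elif prefix == 'E' and start is not None:  # span ends
            spans.append((start, i, sg_type))
            start = None
    return spans

# The Table 1 example: France plans further nuclear cooperation with numerous countries .
tags = ['S_Named', 'S_Pred-01', 'S_NonPred', 'S_NonPred', 'S_Pred-01', 'O',
        'S_NonPred', 'S_NonPred', 'O']
assert decode_bioes(tags)[0] == (0, 0, 'Named')
```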
4.2.6 Attributes (Attr) Network The Attr Network determines a primary attribute for each subgraph, if any.8 This network is simplified to detect only one attribute (there could be 7Degree, mod, or quant are examples of Narg relations in Figure 1a. 8(TOP: plan-01) and (op1: france) are attribute examples shown in Figure 1a. many) per subgraph, and only computes probabilities for the two most common attributes: TOP and polarity. Note that subgraph expansion also identifies many attributes, for example the words associated with named entities, or the normalized quantity and date representations. A known shortcoming of this network is that the TOP and polarity attributes are not mutually exclusive, but noting that the cooccurrence of the two does not occur in the training data, we chose to avoid adding a separate network to allow the prediction of both attributes for a single subgraph. 4.2.7 Named Category (NCat) Network The NCat Network uses features similar to the SG Network, along with the suggested categories (up to eight) from the Wikifier, and produces probabilities for each of 68 :instance roles, or categories, for named entities identified in the training set AMR’s. • Word, Suffix and Caps as in the SG network. • WikiCat[8]: 108 x 5, indexed by suggested categories from the Wikifier. 4.3 Relation Resolution The generated Pargs and Pnargs for each SG identified subgraph are processed to determine the most likely relation connections, using the constraints: 468 1. AMR’s are single component graphs without cycles. 2. AMR’s are simple directed graphs, a max of one relation between any two subgraphs is allowed. 3. Outgoing predicate relations are limited to one of each kind (i.e. can’t have two ARG0’s) We initialize a graph description with all the subgraphs identified by the SG network. Probabilities for all possible edges are represented in the Pargs and Pnargs matrices. The Subgraphs are connected to one another by applying a greedy algorithm, which repeatedly selects the most probable edge from the Pargs and Pnargs matrices and adds the edge to the graph description. After an edge is selected to be added to the graph, we adjust Pargs and Pnargs based on the constraints (hard decisions change the probabilities), and repeat adding edges until all remaining edge probabilities are below a threshold. (The optimum value of this threshold, 0.55, was found by experimenting with the development data set). From then on, only the most probable edges which span graph components are chosen, until the graph contains a single component. Expressed as a step by step procedure, we first define pconnect as the probability threshold at which to require graph component spanning, and we repeat the following, until any two subgraphs in the graph are connected by at least one path. 1. Select the most probable outgoing relation from any of the identified subgraph probability matrices. Denote this probability as pr. 2. If pr < pconnect, keep selecting most probable relations until a component spanning connection is found. 3. Add the selected relation to the graph. If a cycle is created, reverse the relation direction and label. 4. Eliminate impossible relations based on the constraints and re-normalize the affected Pargs and Pnargs matrices. 4.4 AMR Construction AMR Construction converts the connected subgraph AMR into the final AMR graph form, with proper concepts, relations, and root, as follows: 1. 
The TOP attribute occurs exactly once in each AMR, so the subgraph with highest TOP probability produced by the Attr network is identified. The AMR graph is adjusted so that it is rooted with the most probable TOP subgraph. After graph adjustment, new cycles are sometimes created, which are removed by using -of relation reversal. 2. The subgraphs identified by the SG network, which were considered to be single nodes during relation resolution, are expanded to basic AMR concepts and relations to form a concept/relation AMR graph representation, using the Subgraph Expander component developed as shown in Figure 3b. When a subgraph contains two concepts, the choice of connecting to parent or child within the subgraph is made based on training data statistics of each relation type (Arg or Narg) for each subgraph type. 3. Nationalities are normalized (e.g. French to France). 4. A very basic coreference resolution is performed by merging all concepts representing ”I” into a single concept. Coreference resolution was otherwise ignored due to development time constraints. 5 Experimental Setup Semantic graph comparison can be tricky because direct graph alignment fails in the presence of just a few miscompares. A practical graph comparison program called Smatch (Cai and Knight, 2013) is used to consistently evaluate AMR parsers. The smatch python script provides an F1 evaluation metric for whole-sentence semantic graph analysis by comparing sets of triples which describe portions of the graphs, and uses a hill climbing algorithm for efficiency. All networks, including SG, were trained using stochastic gradient descent (SGD) with a fixed learning rate. We tried sentence level loglikelihood, which trains a viterbi decoder, as a training objective, but found no improvement over word-level likelihood (cross entropy). After all LSTM and linear layers, we added dropout to minimize overfitting (Hinton et al., 2012) and batch normalization to reduce sensitivity to learning rates and initialization (Ioffe and Szegedy, 2015). For each of the five networks, we used the LDC2015E86 training split to train parameters, and periodically interrupted training to run the dev split (forward) in order to monitor performance. 469 The model parameters which resulted in best dev performance were saved as the final model. The test split was used as the ”in domain” data set to assess the fully assembled parser. The inferred AMR’s were then evaluated using the smatch program to produce an F1 score. An evaluation dataset was provided for Semeval 2016 task 8, which is significantly different from the LDC2015E86 split dataset. ((2016) describes the eval dataset as ”quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion”). 6 Results We report the statistics for smatch results of the ”test” and ”eval” datasets for 12 trained systems in Table 3. The top five scores for Semeval 2016 task 8, representing the previous state-of-the-art, are shown for context. With a smatch score of between 0.651 and 0.654, and a mean of 0.652, our system improves the state-of-the-art AMR parser performance by between 5.07% and 5.55%, and by a mean of 5.22%. The best performing systems for in-domain (dev and test) data correlated well with the best ones for the out-of-domain (eval) data, although the scores for the eval dataset were lower overall. 6.1 Individual Network Results The word spans tagged by the SG network are used to determine the features for the other networks. 
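This routing of SG spans to the two relation networks can be sketched roughly as follows; `args_net` and `nargs_net` stand in for the trained networks, the subgraph-type strings are illustrative, and the exact interface is an assumption on our part.

```python
# Rough sketch (not the authors' code) of how SG-network spans drive the relation
# networks: predicate spans feed the Args network, and all non-named subgraph
# spans feed the Nargs network.
def relation_probabilities(spans, args_net, nargs_net):
    """spans: list of (span_words, subgraph_type) pairs produced by the SG network."""
    p_args, p_nargs = {}, {}
    for i, (words, sg_type) in enumerate(spans):
        if sg_type.startswith("Pred"):
            # 5 x s matrix of outgoing ARG0..ARG4 probabilities for this predicate.
            p_args[i] = args_net(source=i, spans=spans)
        if sg_type not in ("Named", "O"):
            # 43 x s matrix of outgoing non-ARG relation probabilities.
            p_nargs[i] = nargs_net(source=i, spans=spans)
    return p_args, p_nargs
```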
In particular, every span identified as a predicate will trigger the system to evaluate the Args network in order to determine the probabilities of outgoing predicate ARG relations. Likewise, all spans identified as subgraphs (other than named subgraphs) will lead to a Nargs network evaluation to determine outgoing non-Arg relations. The SG network identifies predicates with 0.93 F1, named subgraphs with 0.91 F1, and all other subgraphs with 0.94 F1. The Args network identifies ARG0 and ARG1 relations with 0.73 F1, but identification of ARG2, ARG3, and ARG4 drops down to (0.53, 0.20, and 0.43). It is difficult for the system to generalize among these relation tags because they differ significantly between predicates. 7 Conclusion and Future Work We have shown that B-LSTM neural networks can be used as the basis for a graph based semantic parser. Our AMR parser effectively exploits the ability of B-LSTM networks to learn to selectively extract information from words separated by long distances in a sentence, and to build up higher level representations by rejecting or remembering important information during sequence processing. There are changes which could be made to eliminate all pre-processing and to further improve parser performance. Eliminating the need for syntactic pre-parsing is valuable since a syntactic parser takes up significant time and computational resources, and errors in the generated syntax will propagate into an AMR parser. Our approach avoids both of these problems, while generating high quality results. Wikification tasks are generally independent from parsing, but wiki links are a requirement for the latest AMR specification. Since our preferred wikifier application generates NER information, we used the generated NER tags as input to the SG network. But it would also be fairly easy to add gazetteer information to the network features in order to remove the need for NER preprocessing. Therefore, the wikification subtask is the only portion of the parser which requires any pre-processing at all. Incorporating wikification gazetteers as B-LSTM features might allow a performant, fully self contained parser to be created. Sense disambiguation is not a very generalizable task, senses other than 01 and 02 for different predicates may differ from each other in ways which are very difficult to discern. A better approach to disambiguation is to consider predicates separately, solving for a set of coefficients for each verb found in the training set. A general set of model parameters could then be used to handle unseen examples. Likewise, high level ARGs like ARG2 and ARG3 don’t generalize very well among different predicates, and ARG inference accuracy could be improved with predicatespecific network parameters for the most common cases. The alignment between concepts and words is not a reliable, direct mapping: some concepts cannot be grounded to words, some are ambiguous, and automatic aligners tend to have high error rates relative to human aligning judgements. Improvements in the quality of the alignment in training data would improve parsing results. 
470 System Description Test F1 Eval (OOD) F1 Our Parser (summary of 12 trained systems) mean 0.707 0.652 min 0.706 0.651 max 0.709 0.654 RIGA (Barzdins and Gosko, 2016) 0.6720 0.6196 Brandeis/cemantix.org/RPI (Wang et al., 2016) 0.6670 0.6195 CU-NLP (Foland Jr and Martin, 2016) 0.6610 0.6060 ICL-HD (Brandt et al., 2016) 0.6200 0.6005 UCL+Sheffield (Goodman et al., 2016) 0.6370 0.5983 Table 3: Smatch F1 results for our parser and top 5 parsers from semeval 2016 task 8. References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStep´anek, Pavel Stran´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2012. Abstract meaning representation (amr) 1.0 specification. In Parsing on Freebase from Question-Answer Pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle: ACL. pages 1533–1544. Satanjeev Banerjee and Ted Pedersen. 2002. An adapted lesk algorithm for word sense disambiguation using wordnet. In Computational linguistics and intelligent text processing, Springer, pages 136– 145. Guntis Barzdins and Didzis Gosko. 2016. Riga at semeval-2016 task 8: Impact of smatch extensions and character-level neural translation on amr parsing accuracy. arXiv preprint arXiv:1604.01278 . Johan Bos. 2016. Expressive power of abstract meaning representations. Computational Linguistics 42(3):527–535. Lauritz Brandt, David Grimm, Mengfei Zhou, and Yannick Versley. 2016. Icl-hd at semeval-2016 task 8: Meaning representation parsing-augmenting amr parsing with a preposition semantic role labeling neural network. Proceedings of SemEval pages 1160–1166. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In ACL (2). pages 748–752. X. Cheng and D. Roth. 2013. Relational inference for wikification. In EMNLP. http://cogcomp.cs.illinois.edu/papers/ChengRo13.pdf. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308 . Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493–2537. Jeffrey Flanigan, Sam Thomson, Jaime G Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the abstract meaning representation . Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4. Association for Computational Linguistics, Stroudsburg, PA, USA, CONLL ’03, pages 168–171. https://doi.org/10.3115/1119176.1119201. William R Foland Jr and James H Martin. 2016. Cunlp at semeval-2016 task 8: Amr parsing using lstmbased recurrent neural networks. Proceedings of SemEval pages 1197–1201. Felix A Gers, Douglas Eck, and J¨urgen Schmidhuber. 2001. Applying lstm to time series predictable through time-window approaches. In Artificial Neural NetworksICANN 2001, Springer, pages 669–676. James Goodman, Andreas Vlachos, and Jason Naradowsky. 2016. 
Ucl+ sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an α-bound. Proceedings of SemEval pages 1167–1172. Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. 2013. Speech recognition with deep recurrent neural networks. CoRR abs/1303.5778. http://arxiv.org/abs/1303.5778. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR abs/1207.0580. http://arxiv.org/abs/1207.0580. 471 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR abs/1502.03167. http://arxiv.org/abs/1502.03167. Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In LREC. Citeseer. Jonathan May. 2016. Semeval-2016 task 8: Meaning representation parsing. Proceedings of SemEval pages 1063–1073. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes 30(1):3–26. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics 31(1):71– 106. Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for amr parsing. CoNLL 2015 page 32. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014) 12. Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning english strings with abstract meaning representation graphs. In EMNLP. pages 425–429. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing english into abstract meaning representation using syntaxbased machine translation. Training 10:218–021. L. Ratinov, D. Roth, D. Downey, and M. Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In ACL. http://cogcomp.cs.illinois.edu/papers/RRDA11.pdf. J¨urgen Schmidhuber, F Gers, and Douglas Eck. 2002. Learning nonregular languages: A comparison of simple recurrent networks and lstm. Neural Computation 14(9):2039–2041. Jason R Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. In ACL (1). pages 1374–1383. Jannik Str¨otgen and Michael Gertz. 2010. Heideltime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 321–324. Lucy Vanderwende, Arul Menezes, and Chris Quirk. 2015. An amr parser for english, french, german, spanish and japanese and a new amr-annotated corpus. In Proceedings of NAACL-HLT. pages 26–30. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th international workshop on semantic evaluation. Association for Computational Linguistics, pages 57–62. Chuan Wang, Sameer Pradhan, Nianwen Xue, Xiaoman Pan, and Heng Ji. 2016. Camr at semeval-2016 task 8: An extended transition-based amr parser. Proceedings of SemEval pages 1173–1178. Chuan Wang, Nianwen Xue, Sameer Pradhan, and Sameer Pradhan. 2015. 
A transition-based algorithm for amr parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 366–375. Keenon Werling, Gabor Angeli, and Christopher Manning. 2015. Robust subgraph generation improves abstract meaning representation parsing. arXiv preprint arXiv:1506.03139 . Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. 472
2017
43
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 473–483 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1044 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 473–483 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1044 Deep Semantic Role Labeling: What Works and What’s Next Luheng He†, Kenton Lee†, Mike Lewis‡, and Luke Zettlemoyer†∗ † Paul G. Allen School of Computer Science & Engineering, Univ. of Washington, Seattle, WA {luheng, kentonl, lsz}@cs.washington.edu ‡ Facebook AI Research, Menlo Park, CA [email protected] ∗Allen Institute for Artificial Intelligence, Seattle, WA [email protected] Abstract We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results. 1 Introduction Semantic role labeling (SRL) systems aim to recover the predicate-argument structure of a sentence, to determine essentially “who did what to whom”, “when”, and “where.” Recently breakthroughs involving end-to-end deep models for SRL without syntactic input (Zhou and Xu, 2015; Marcheggiani et al., 2017) seem to overturn the long-held belief that syntactic parsing is a prerequisite for this task (Punyakanok et al., 2008). In this paper, we show that this result can be pushed further using deep highway bidirectional LSTMs with constrained decoding, again significantly moving the state of the art (another 2 points on CoNLL 2005). We also present a careful empirical analysis to determine what works well and what might be done to progress even further. Our model combines a number of best practices in the recent deep learning literature. Following Zhou and Xu (2015), we treat SRL as a BIO tagging problem and use deep bidirectional LSTMs. However, we differ by (1) simplifying the input and output layers, (2) introducing highway connections (Srivastava et al., 2015; Zhang et al., 2016), (3) using recurrent dropout (Gal and Ghahramani, 2016), (4) decoding with BIOconstraints, and (5) ensembling with a product of experts. Our model gives a 10% relative error reduction over previous state of the art on the test sets of CoNLL 2005 and 2012. We also report performance with predicted predicates to encourage future exploration of end-to-end SRL systems. 
We present detailed error analyses to better understand the performance gains, including (1) design choices on architecture, initialization, and regularization that have a surprisingly large impact on model performance; (2) different types of prediction errors showing, e.g., that deep models excel at predicting long-distance dependencies but still struggle with known challenges such as PPattachment errors and adjunct-argument distinctions; (3) the role of syntax, showing that there is significant room for improvement given oracle syntax but errors from existing automatic parsers prevent effective use in SRL. In summary, our main contributions incluede: • A new state-of-the-art deep network for endto-end SRL, supported by publicly available code and models.1 • An in-depth error analysis indicating where the model works well and where it still struggles, including discussion of structural consistency and long-distance dependencies. • Experiments that point toward directions for future improvements, including a detailed discussion of how and when syntactic parsers could be used to improve these results. 1https://github.com/luheng/deep_srl 473 2 Model Two major factors contribute to the success of our deep SRL model: (1) applying recent advances in training deep recurrent neural networks such as highway connections (Srivastava et al., 2015) and RNN-dropouts (Gal and Ghahramani, 2016),2 and (2) using an A∗decoding algorithm (Lewis and Steedman, 2014; Lee et al., 2016) to enforce structural consistency at prediction time without adding more complexity to the training process. Formally, our task is to predict a sequence y given a sentence-predicate pair (w, v) as input. Each yi ∈y belongs to a discrete set of BIO tags T . Words outside argument spans have the tag O, and words at the beginning and inside of argument spans with role r have the tags Br and Ir respectively. Let n = |w| = |y| be the length of the sequence. Predicting an SRL structure under our model involves finding the highest-scoring tag sequence over the space of all possibilities Y: ˆy = argmax y∈Y f(w, y) (1) We use a deep bidirectional LSTM (BiLSTM) to learn a locally decomposed scoring function conditioned on the input: Pn t=1 log p(yt | w). To incorporate additional information (e.g., structural consistency, syntactic input), we augment the scoring function with penalization terms: f(w, y) = n X t=1 log p(yt | w) − X c∈C c(w, y1:t) (2) Each constraint function c applies a non-negative penalty given the input w and a length-t prefix y1:t. These constraints can be hard or soft depending on whether the penalties are finite. 2.1 Deep BiLSTM Model Our model computes the distribution over tags using stacked BiLSTMs, which we define as follows: il,t = σ(Wl i[hl,t+δl, xl,t] + bl i) (3) ol,t = σ(Wl o[hl,t+δl, xl,t] + bl o) (4) fl,t = σ(Wl f[hl,t+δl, xl,t] + bl f + 1) (5) ˜cl,t = tanh(Wl c[hl,t+δl, xl,t] + bl c) (6) cl,t = il,t ◦˜cl,t + fl,t ◦ct+δl (7) hl,t = ol,t ◦tanh(cl,t) (8) 2We thank Mingxuan Wang for suggesting highway connections with simplified inputs and outputs. Part of our model is extended from his unpublished implementation. + + + The 0 P(BARG0) + + + cats 0 P(IARG0) + + + love 1 P(BV) + + + hats 0 P(BARG1) Softmax Transform Gates LSTM Word & Predicate Figure 1: Highway LSTM with four layers. The curved connections represent highway connections, and the plus symbols represent transform gates that control inter-layer information flow. where xl,t is the input to the LSTM at layer l and timestep t. 
δl is either 1 or −1, indicating the directionality of the LSTM at layer l. To stack the LSTMs in an interleaving pattern, as proposed by Zhou and Xu (2015), the layerspecific inputs xl,t and directionality δl are arranged in the following manner: xl,t = ( [Wemb(wt), Wmask(t = v)] l = 1 hl−1,t l > 1 (9) δl = ( 1 if l is even −1 otherwise (10) The input vector x1,t is the concatenation of token wt’s word embedding and an embedding of the binary feature (t = v) indicating whether wt word is the given predicate. Finally, the locally normalized distribution over output tags is computed via a softmax layer: p(yt | x) ∝exp(Wy taghL,t + btag) (11) Highway Connections To alleviate the vanishing gradient problem when training deep BiLSTMs, we use gated highway connections (Zhang et al., 2016; Srivastava et al., 2015). We include transform gates rt to control the weight of linear and non-linear transformations between layers (See Figure 1). The output hl,t is changed to: rl,t = σ(Wl r[hl,t−1, xt] + bl r) (12) h′ l,t = ol,t ◦tanh(cl,t) (13) hl,t = rl,t ◦h′ l,t + (1 −rl,t) ◦Wl hxl,t (14) 474 Recurrent Dropout To reduce over-fitting, we use dropout as described in Gal and Ghahramani (2016). A shared dropout mask zl is applied to the hidden state: ehl,t = rl,t ◦h′ l,t + (1 −rl,t) ◦Wl hxl,t (15) hl,t = zl ◦ehl,t (16) zl is shared across timesteps at layer l to avoid amplifying the dropout noise along the sequence. 2.2 Constrained A∗Decoding The approach described so far does not model any dependencies between the output tags. To incorporate constraints on the output structure at decoding time, we use A∗search over tag prefixes for decoding. Starting with an empty sequence, the tag sequence is built from left to right. The score for a partial sequence with length t is defined as: f(w, y1:t) = t X i=1 log p(yi | w) − X c∈C c(w, y1:i) (17) An admissible A∗heuristic can be computed efficiently by summing over the best possible tags for all timesteps after t: g(w, y1:t) = n X i=t+1 max yi∈T log p(yi | w) (18) Exploration of the prefixes is determined by an agenda A which is sorted by f(w, y1:t) + g(w, y1:t). In the worst case, A∗explores exponentially many prefixes, but because the distribution p(yt | w) learned by the BiLSTM models is very peaked, the algorithm is efficient in practice. We list some example constraints as follows: BIO Constraints These constraints reject any sequence that does not produce valid BIO transitions, such as BARG0 followed by IARG1. SRL Constraints Punyakanok et al. (2008); T¨ackstr¨om et al. (2015) described a list of SRLspecific global constraints: • Unique core roles (U): Each core role (ARG0-ARG5, ARGA) should appear at most once for each predicate. • Continuation roles (C): A continuation role C-X can exist only when its base role X is realized before it. • Reference roles (R): A reference role R-X can exist only when its base role X is realized (not necessarily before R-X). We only enforce U and C constraints, since the R constraints are more commonly violated in gold data and enforcing them results in worse performance (see discussions in Section 4.3). Syntactic Constraints We can enforce consistency with a given parse tree by rejecting or penalizing arguments that are not constituents. In Section 4.4, we will discuss the motivation behind using syntactic constraints and experimental results using both predicted and gold syntax. 
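The constrained decoder can be made concrete with a short sketch. The following illustrative implementation assumes only the hard BIO constraint (an infinite penalty for invalid transitions), with `log_probs` the n x |T| matrix of token-level log-probabilities from the BiLSTM and `tags` the list of BIO strings; it is not the released implementation.

```python
# Sketch of constrained A* decoding over tag prefixes with a hard BIO constraint.
import heapq
import numpy as np

def bio_ok(prev, tag):
    # An I-X tag must continue a B-X or I-X span with the same role X.
    if tag.startswith("I-"):
        return prev is not None and prev[0] in "BI" and prev[2:] == tag[2:]
    return True

def astar_decode(log_probs, tags):
    """log_probs: (n, T) numpy array of log p(y_t | w); tags: list of T BIO strings."""
    n, num_tags = log_probs.shape
    best = log_probs.max(axis=1)
    # Admissible heuristic: suffix[t] = best possible score for positions t..n-1.
    suffix = np.zeros(n + 1)
    suffix[:n] = np.cumsum(best[::-1])[::-1]

    agenda = [(-suffix[0], 0.0, ())]              # (-(f+g), f, prefix of tags)
    while agenda:
        _, f, prefix = heapq.heappop(agenda)
        t = len(prefix)
        if t == n:
            return list(prefix)                   # first complete prefix popped is optimal
        prev = prefix[-1] if prefix else None
        for j in range(num_tags):
            if not bio_ok(prev, tags[j]):         # hard constraint: reject this extension
                continue
            new_f = f + log_probs[t, j]
            heapq.heappush(agenda, (-(new_f + suffix[t + 1]), new_f, prefix + (tags[j],)))
    return None
```

For example, `astar_decode(np.log(p), ["O", "B-ARG0", "I-ARG0", "B-V"])` returns the highest-scoring tag sequence that satisfies the BIO constraints; soft constraints would subtract a finite penalty from `new_f` instead of rejecting the prefix.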
2.3 Predicate Detection While the CoNLL 2005 shared task assumes gold predicates as input (Carreras and M`arquez, 2005), this information is not available in many downstream applications. We propose a simple model for end-to-end SRL, where the system first predicts a set of predicate words v from the input sentence w. Then each predicate in v is used as an input to argument prediction. We independently predict whether each word in the sentence is a predicate, using a binary softmax over the outputs of a bidirectional LSTM trained to maximize the likelihood of the gold labels. 3 Experiments 3.1 Datasets We measure the performance of our SRL system on two PropBank-style, span-based SRL datasets: CoNLL-2005 (Carreras and M`arquez, 2005) and CoNLL-2012 (Pradhan et al., 2013)3. Both datasets provide gold predicates (their index in the sentence) as part of the input. Therefore, each provided predicate corresponds to one training/test tag sequence. We follow the traindevelopment-test split for both datasets and use the official evaluation script from the CoNLL 2005 shared task for evaluation on both datasets. 3.2 Model Setup Our network consists of 8 BiLSTM layers (4 forward LSTMs and 4 reversed LSTMs) with 300dimensional hidden units, and a softmax layer for predicting the output distribution. Initialization All the weight matrices in BiLSTMs are initialized with random orthonormal matrices as described in Saxe et al. (2013). 3We used the version of OntoNotes downloaded at: http://cemantix.org/data/ontonotes.html. 475 Development WSJ Test Brown Test Combined Method P R F1 Comp. P R F1 Comp. P R F1 Comp. F1 Ours (PoE) 83.1 82.4 82.7 64.1 85.0 84.3 84.6 66.5 74.9 72.4 73.6 46.5 83.2 Ours 81.6 81.6 81.6 62.3 83.1 83.0 83.1 64.3 72.9 71.4 72.1 44.8 81.6 Zhou 79.7 79.4 79.6 82.9 82.8 82.8 70.7 68.2 69.4 81.1 FitzGerald (Struct.,PoE) 81.2 76.7 78.9 55.1 82.5 78.2 80.3 57.3 74.5 70.0 72.2 41.3 T¨ackstr¨om (Struct.) 81.2 76.2 78.6 54.4 82.3 77.6 79.9 56.0 74.3 68.6 71.3 39.8 Toutanova (Ensemble) 78.6 58.7 81.9 78.8 80.3 60.1 68.8 40.8 Punyakanok (Ensemble) 80.1 74.8 77.4 50.7 82.3 76.8 79.4 53.8 73.4 62.9 67.8 32.3 77.9 Table 1: Experimental results on CoNLL 2005, in terms of precision (P), recall (R), F1 and percentage of completely correct predicates (Comp.). We report results of our best single and ensemble (PoE) model. The comparison models are Zhou and Xu (2015), FitzGerald et al. (2015), T¨ackstr¨om et al. (2015), Toutanova et al. (2008) and Punyakanok et al. (2008). Development Test Method P R F1 Comp. P R F1 Comp. Ours (PoE) 83.5 83.2 83.4 67.5 83.5 83.3 83.4 68.5 Ours 81.8 81.4 81.5 64.6 81.7 81.6 81.7 66.0 Zhou 81.1 81.3 FitzGerald (Struct.,PoE) 81.0 78.5 79.7 60.9 81.2 79.0 80.1 62.6 T¨ackstr¨om (Struct.) 80.5 77.8 79.1 60.1 80.6 78.2 79.4 61.8 Pradhan (revised) 78.5 76.6 77.5 55.8 Table 2: Experimental results on CoNLL 2012 in the same metrics as above. We compare our best single and ensemble (PoE) models against Zhou and Xu (2015), FitzGerald et al. (2015), T¨ackstr¨om et al. (2015) and Pradhan et al. (2013). All tokens are lower-cased and initialized with 100-dimensional GloVe embeddings pre-trained on 6B tokens (Pennington et al., 2014) and updated during training. Tokens that are not covered by GloVe are replaced with a randomly initialized UNK embedding. Training We use Adadelta (Zeiler, 2012) with ϵ = 1e−6 and ρ = 0.95 and mini-batches of size 80. We set RNN-dropout probability to 0.1 and clip gradients with norm larger than 1. 
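For concreteness, these optimisation settings can be written as a short PyTorch-style sketch; the authors' released implementation is separate and may differ, and `model`, `batches`, and `loss_fn` are placeholder names of ours.

```python
# Illustrative training step with the stated hyperparameters: Adadelta with
# rho=0.95 and eps=1e-6, mini-batches of 80, and gradient clipping at norm 1.
import torch

def make_optimizer(model):
    return torch.optim.Adadelta(model.parameters(), rho=0.95, eps=1e-6)

def train_epoch(model, optimizer, batches, loss_fn):
    for words, predicate_mask, gold_tags in batches:     # mini-batches of size 80
        optimizer.zero_grad()
        log_probs = model(words, predicate_mask)          # (batch, n, num_tags)
        loss = loss_fn(log_probs, gold_tags)              # word-level cross entropy
        loss.backward()
        # Rescale gradients whose global norm exceeds 1, then update.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
```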
All the models are trained for 500 epochs with early stopping based on development results. 4 Ensembling We use a product of experts (Hinton, 2002) to combine predictions of 5 models, each trained on 80% of the training corpus and validated on the remaining 20%. For the CoNLL 2012 corpus, we split the training data from each sub-genre into 5 folds, such that the training data will have similar genre distributions. Constrained Decoding We experimented with different types of constraints on the CoNLL 2005 4Training the full model on CoNLL 2005 takes about 5 days on a single Titan X Pascal GPU. and CoNLL 2012 development sets. Only the BIO hard constraints significantly improve over the ensemble model. Therefore, in our final results, we only use BIO hard constraints during decoding. 5 3.3 Results In Table 1 and 2, we compare our best single and ensemble model with previous work. Our ensemble (PoE) has an absolute improvement of 2.1 F1 on both CoNLL 2005 and CoNLL 2012 over the previous state of the art. Our single model also achieves more than a 0.4 improvement on both datasets. In comparison with the best reported results, our percentage of completely correct predicates improves by 5.9 points. While the continuing trend of improving SRL without syntax seems to suggest that neural end-to-end systems no longer needs parsers, our analysis in Section 4.4 will show that accurate syntactic information can improve these deep models. 5A∗search in this setting finds the optimal sequence for all sentences and is therefore equivalent to Viterbi decoding. 476 Predicate Detection End-to-end SRL (Single) End-to-end SRL (PoE) Dataset P R F1 P R F1 P R F1 ∆F1 CoNLL 2005 Dev. 97.4 97.4 97.4 80.3 80.4 80.3 81.8 81.2 81.5 -1.2 WSJ Test 94.5 98.5 96.4 80.2 82.3 81.2 82.0 83.4 82.7 -1.9 Brown Test 89.3 95.7 92.4 67.6 69.6 68.5 69.7 70.5 70.1 -3.5 CoNLL 2012 Dev. 88.7 90.6 89.7 74.9 76.2 75.5 76.5 77.8 77.2 -6.2 CoNLL 2012 Test 93.7 87.9 90.7 78.6 75.1 76.8 80.2 76.6 78.4 -5.0 Table 3: Predicate detection performance and end-to-end SRL results using predicted predicates. ∆F1 shows the absolute performance drop compared to our best ensemble model with gold predicates. 100 200 300 400 500 65 70 75 80 Num. epochs Dev. F1 % Our model No highway connections No dropout No orthogonal initialization Figure 2: Smoothed learning curve of various ablations. The combination of highway layers, orthonormal parameter initialization and recurrent dropout is crucial to achieving strong performance. The numbers shown here are without constrained decoding. 3.4 Ablations Figure 2 shows learning curves of our model ablations on the CoNLL 2005 development set. We ablate our full model by removing highway connections, RNN-dropout, and orthonormal initialization independently. Without dropout, the model overfits at around 300 epochs at 78 F1. Orthonormal parameter initialization is surprisingly important—without this, the model achieves only 65 F1 within the first 50 epochs. All 8 layer ablations suffer a loss of more than 1.7 in absolute F1 compared to the full model. 3.5 End-to-end SRL The network for predicate detection (Section 2.3) contains 2 BiLSTM layers with 100-dimensional hidden units, and is trained for 30 epochs. For end-to-end evaluation, all arguments predicted for the false positive predicates are counted as precision loss, and all arguments for the false negative predicates are considered as recall loss. 
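Under this convention, scoring reduces to set comparison over arguments keyed by their predicate, since arguments of spurious (false-positive) predicates can never match gold and so hurt precision, while arguments of missed (false-negative) predicates remain unmatched in gold and so hurt recall. A small sketch of this scoring (ours, not the official evaluation script):

```python
# Sketch of end-to-end argument scoring with predicted predicates.
def end_to_end_f1(pred_args, gold_args):
    """Each argument is a (predicate_index, span_start, span_end, role) tuple."""
    pred_args, gold_args = set(pred_args), set(gold_args)
    matched = len(pred_args & gold_args)
    precision = matched / len(pred_args) if pred_args else 0.0
    recall = matched / len(gold_args) if gold_args else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if matched else 0.0
    return precision, recall, f1
```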
Table 3 shows the predicate detection F1 as well as end-to-end SRL results with predicted predicates.6 On CoNLL 2005, the predicate detector achieved over 96 F1, and the final SRL results only drop 1.2-3.5 F1 compared to using the gold predicates. However, on CoNLL 2012, the predicate detector has only about 90 F1, and the final SRL results decrease by up to 6.2 F1. This is at least in part due to the fact that CoNLL 2012 contains some nominal and copula predicates (Weischedel et al., 2013), making predicate identification a more challenging problem. 4 Analysis To better understand our deep SRL model and its relation to previous work, we address the following questions with a suite of empirical analyses: • What is the model good at and what kinds of mistakes does it make? • How well do LSTMs model global structural consistency, despite conditionally independent tagging decisions? • Is our model implicitly learning syntax, and could explicitly modeling syntax still help? All the analysis in this section is done on the CoNLL 2005 development set with gold predicates, unless otherwise stated. We are also able to compare to previous systems whose model predictions are available online (Punyakanok et al., 2005; Pradhan et al., 2005).7 4.1 Error Types Breakdown Inspired by Kummerfeld et al. (2012), we define a set of oracle transformations that fix various prediction errors sequentially and observe the relative improvement after each operation (see Table 4). Figure 3 shows how our work compares to the pre6The frame identification numbers reported in Pradhan et al. (2013) are not comparable, due to errors in the original release of the data, as mentioned in T¨ackstr¨om et al. (2015). 7Model predictions of CoNLL 2005 systems: http:// www.cs.upc.edu/˜srlconll/st05/st05.html 477 Orig. Fix Labels Move Core Arg. Merge Spans Split Spans Fix Span Boundary Drop Arg. Add Arg. 75 80 85 90 95 100 F1 % Ours Pradhan Punyakanok Figure 3: Performance after doing each type of oracle transformation in sequence, compared to two strong non-neural baselines. The gap is closed after the Add Arg. transformation, showing how our approach is gaining from predicting more arguments than traditional systems. vious systems in terms of different types of mistakes. While our model makes a similar number of labeling errors to traditional syntax-based systems, it has far fewer missing arguments (perhaps due to parser errors making some arguments difficult to recover for syntax-based systems). Label Confusion As shown in Table 4, our system most commonly makes labeling errors, where the predicted span is an argument but the role was incorrectly labeled. Table 5 shows a confusion matrix for the most frequent labels. The model often confuses ARG2 with AM-DIR, AM-LOC and AM-MNR. These confusions can arise due to the use of ARG2 in many verb frames to represent semantic relations such as direction or location. For example, ARG2 in the frame move.01 is defined as Arg2-GOL: destination. 8 This type of argumentadjunct distinction is known to be difficult (Kingsbury et al., 2002), and it is not surprising that our neural model has many such failure cases. Attachment Mistakes A second common source of error is reflected by the Merge Spans transformation (10.6%) and the Split Spans transformation (14.7%). These errors are closely tied to prepositional phrase (PP) attachment errors, which are also known to be some of the biggest challenges for linguistic analysis (Kummerfeld et al., 2012). 
Figure 4 shows the distribution of syntactic span labels involved in an attachment mistake, where 62% of the syntactic spans are prepositional phrases. For example, in Sumitomo 8Source: Unified verb index: http://verbs. colorado.edu. Operation Description % Fix Labels Correct the span label if its boundary matches gold. 29.3 Move Arg. Move a unique core argument to its correct position. 4.5 Merge Spans Combine two predicted spans into a gold span if they are separated by at most one word. 10.6 Split Spans Split a predicted span into two gold spans that are separated by at most one word. 14.7 Fix Boundary Correct the boundary of a span if its label matches an overlapping gold span. 18.0 Drop Arg. Drop a predicted argument that does not overlap with any gold span. 7.4 Add Arg. Add a gold argument that does not overlap with any predicted span. 11.0 Table 4: Oracle transformations paired with the relative error reduction after each operation. All the operations are permitted only if they do not cause any overlapping arguments. pred. \ gold A0 A1 A2 A3 ADV DIR LOC MNR PNC TMP A0 55 11 13 4 0 0 0 0 0 A1 78 46 0 0 22 11 10 25 14 A2 11 23 48 15 56 33 41 25 0 A3 3 2 2 4 0 0 0 25 14 ADV 0 0 0 4 0 15 29 25 36 DIR 0 0 5 4 0 11 2 0 0 LOC 5 9 12 0 4 0 10 0 14 MNR 3 0 12 26 33 0 0 0 21 PNC 0 3 5 4 0 11 4 2 0 TMP 0 8 5 0 41 11 26 6 0 Table 5: Confusion matrix for labeling errors, showing the percentage of predicted labels for each gold label. We only count predicted arguments that match gold span boundaries. financed the acquisition from Sears, our model mistakenly labels the prepositional phrase from Sears as the ARG2 of financed, whereas it should instead attach to acquisition. 4.2 Long-range Dependencies To analyze the model’s ability to capture longrange dependencies, we compute the F1 of our model on arguments with various distances to the predicate. Figure 5 shows that performance tends to degrade, for all models, for arguments further from the predicate. Interestingly, the gap between shallow and deep models becomes much larger for the long-distance predicate-argument structures. The absolute gap between our 2 layer and 8 layer models is 3-4 F1 for arguments that are within 2 words to the predicate, and 5-6 F1 for arguments that are farther away from the predicate. Surpris478 PP VP NP SBAR ADVP Other 0 20 40 60 80 100 62 10 10 5 4 9 % of labels Figure 4: For cases where our model either splits a gold span into two (Z →XY ) or merges two gold constituents (XY →Z), we show the distribution of syntactic labels for the Y span. Results show the major cause of these errors is inaccurate prepositional phrase attachment. 0 1-2 3-6 7-max 55 60 65 70 75 80 85 Distance (num. words in between) F1 % L8 L6 L4 L2 Punyakanok Pradhan Figure 5: F1 by surface distance between predicates and arguments. Performance degrades least rapidly on long-range arguments for the deeper neural models. ingly, the neural model performance deteriorates less severely on long-range dependencies than traditional syntax-based models. 4.3 Structural Consistency We can quantify two types of structural consistencies: the BIO constraints and the SRL-specific constraints. Via our ablation study, we show that deeper BiLSTMs are better at enforcing these structural consistencies, although not perfectly. BIO Violations The BIO format requires argument spans to begin with a B tag. Any I tag directly following an O tag or a tag with different label is considered a violation. 
Table 6 shows the number of BIO violations per token for BiLSTMs with different depths. The number of BIO violations decreases when we use a deeper model. The gap is biggest between 2-layer and 4-layer models, and diminishes after that. It is surprising that although the deeper models generate impressively accurate token-level predicHousing starts are expected to quicken a bit from August’s pace ARG0 ARG1 ARG2 ARG2 ARG1 V ARG2 ARG3 ARG2 ARG0 V V Gold Pred. +SRL Figure 6: Example where performance is hurt by enforcing the constraint that core roles may only occur once (+SRL). tions, they still make enough BIO errors to significantly hurt performance—when these constraints are simple enough to be enforced by trivial rules. We compare the average entropy between tokens involved in BIO violations with the averaged entropy of all tokens. For the 8-layer model, the average entropy on these tokens is 30 times higher than the averaged entropy on all tokens. This suggests that BIO inconsistencies occur when there is some ambiguity. For example, if the model is unsure whether two consecutive words should belong to an ARG0 or ARG1, it might generate inconsistent BIO sequences such as BARG0, IARG1 when decoding at the token level. Using BIO-constrained decoding can resolve this ambiguity and result in a structurally consistent solution. SRL Structure Violations The model predictions can also violate the SRL-specific constraints commonly used in prior work (Punyakanok et al., 2008; T¨ackstr¨om et al., 2015). As shown in Table 7, the model occasionally violates these SRL constraints. With our constrained decoding algorithm, it is straightforward to enforce the unique core roles (U) and continuation roles (C) constraints during decoding. The constrained decoding results are shown with the model named L8+PoE+SRL in Table 7. Although the violations are eliminated, the performance does not significantly improve. This is mainly due to two factors: (1) the model often already satisfies these constraints on its own, so the number of violations to be fixed are relatively small, and (2) the gold SRL structure sometimes violates the constraints and enforcing hard constraints can hurt performance. Figure 6 shows a sentence in the CoNLL 2005 development set. Our original model produces two ARG2s for the predicate quicken, and this violates the SRL constraints. When the A∗decoder fixes this violation, it changes the first ARG1 into ARG2 because ARG0, ARG1, ARG2 is a more frequent pattern in the training data and has higher overall score. 479 Accuracy Violations Avg. Entropy Model (no BIO) F1 Token BIO All BIO L8+PoE 81.5 91.5 0.07 0.02 0.72 L8 80.5 90.9 0.07 0.02 0.73 L6 80.1 90.3 0.06 0.02 0.72 L4 79.1 90.2 0.08 0.02 0.70 L2 74.6 88.4 0.18 0.03 0.66 Table 6: Comparison of BiLSTM models without BIO decoding. We compare F1 and token-level accuracy (Token), averaged BIO violations per token (BIO), overall model entropy (All) model entropy at tokens involved in BIO violations (BIO). Increasing the depth of the model beyond 4 does not produce more structurally consistent output, emphasizing the need for constrained decoding. 4.4 Can Syntax Still Help SRL? The Propbank-style SRL formalism is closely tied to syntax (Bonial et al., 2010; Weischedel et al., 2013). In Table 7, we show that 98.7% of the gold SRL arguments match an unlabeled constituent in the gold syntax tree. Similar to some recent work (Zhou and Xu, 2015), our model achieves strong performance without directly modeling syntax. 
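For reference, the unique-core-role (U) constraint can be expressed as a hard penalty on decoder prefixes in the same style as the A* sketch earlier; this is illustrative only, and the core-role inventory is taken from the constraint description above.

```python
# Hard penalty enforcing that each core role is started (B-X) at most once.
CORE = {"ARG0", "ARG1", "ARG2", "ARG3", "ARG4", "ARG5", "ARGA"}

def unique_core_penalty(prefix):
    """prefix: tuple of BIO tags chosen so far; returns inf on a violation, else 0."""
    starts = [t[2:] for t in prefix if t.startswith("B-") and t[2:] in CORE]
    return float("inf") if len(starts) != len(set(starts)) else 0.0
```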
A natural question follows: are neural SRL models implicitly learning syntax? Table 7 shows the trend of deeper models making predictions that are more consistent with the gold syntax in terms of span boundaries. With our best model (L8+PoE), 94.3% of the predicted arguments spans are part of the gold parse tree. This consistency is on par with previous CoNLL 2005 systems that directly model constituency and use predicted parse trees as features (Punyakanok, 95.3% and Pradhan, 93.0%). Constrained Decoding with Syntax The above analysis raises a further question: would improving consistency with syntax provide improvements for SRL? Our constrained decoding algorithm described in Section 2.2 enables us to inject syntax as a decoding constraint without having to retrain the model. Specifically, if the decoded sequence contains k arguments that do not match any unlabeled syntactic constituent, it will receive a penalty of kC, where C is a single parameter dictating how much the model should trust the provided syntax. In Figure 7, we compare the SRL accuracy with syntactic constraints specified by gold parse or automatic parses. When using gold syntax, the predictions improve up to 2 F1 as the penalty increases. A state-of-the-art parser (Choe SRL-Violations Model or Oracle F1 Syn % U C R Gold 100.0 98.7 24 0 61 L8+PoE 82.7 94.3 37 3 68 L8 81.6 94.0 48 4 73 L6 81.4 93.7 39 3 85 L4 80.5 93.2 51 3 84 L2 77.2 91.3 96 5 72 L8+PoE+SRL 82.8 94.2 5 1 68 L8+PoE+AutoSyn 83.2 96.1 113 3 68 L8+PoE+GoldSyn 85.0 97.6 102 3 68 Punyakanok 77.4 95.3 0 0 0 Pradhan 78.3 93.0 84 3 58 Table 7: Comparison of models with different depths and decoding constraints (in addition to BIO) as well as two previous systems. We compare F1, unlabeled agreement with gold constituency (Syn%) and each type of SRL-constraint violations (Unique core roles, Continuation roles and Reference roles). Our best model produces a similar number of constraint violations to the gold annotation, explaining why deterministically enforcing these constraints is not helpful. and Charniak, 2016) provides smaller gains, while using the Charniak parser (Charniak, 2000) hurts performance if the model places too much trust in it. These results suggest that high-quality syntax can still make a large impact on SRL. A known challenge for syntactic parsers is robustness on out-of-domain data. Therefore we provide experimental results in Table 8 for both CoNLL 2005 and CoNLL 2012, which consists of 8 different genres. The penalties are tuned on the two development sets separately (C = 10000 on CoNLL 2005 and C = 20 on CoNLL 2012). On the CoNLL 2005 development set, the predicted syntax gives a 0.5 F1 improvement over our best model, while on in-domain test and outof-domain development sets, the improvement is much smaller. As expected, on CoNLL 2012, syntax improves most on the newswire (NW) domain. These improvements suggest that while decoding with hard constraints is beneficial, joint training or multi-task learning could be even more effective by leveraging full, labeled syntactic structures. 5 Related Work Traditional approaches to semantic role labeling have used syntactic parsers to identify constituents and model long-range dependencies, and enforced 480 0 1 10 100 1000 10000 ∞ 82 83 84 85 Penalty C F1 % Gold Choe Charniak Figure 7: Performance of syntax-constrained decoding as the non-constituent penalty increases for syntax from two parsers (from Choe and Charniak (2016) and Charniak (2000)) and gold syntax. 
The best existing parser gives a small improvement, but the improvement from gold syntax shows that there is still potential for syntax to help SRL. CoNLL-05 CoNLL-2012 Dev. Dev. Test BC BN NW MZ PT TC WB L8+PoE 82.7 84.6 81.4 82.8 82.8 80.4 93.6 84.8 81.0 +AutoSyn 83.2 84.8 81.5 82.8 83.2 80.6 93.7 84.9 81.1 Table 8: F1 on CoNLL 2005, and the development set of CoNLL 2012, broken down by genres. Syntax-constrained decoding (+AutoSyn) shows bigger improvement on in-domain data (CoNLL 05 and CoNLL 2012 NW). global consistency using integer linear programming (Punyakanok et al., 2008) or dynamic programs (T¨ackstr¨om et al., 2015). More recently, neural methods have been employed on top of syntactic features (FitzGerald et al., 2015; Roth and Lapata, 2016). Our experiments show that offthe-shelf neural methods have a remarkable ability to learn long-range dependencies, syntactic constituency structure, and global constraints without coding task-specific mechanisms for doing so. An alternative line of work has attempted to reduce the dependency on syntactic input for semantic role labeling models. Collobert et al. (2011) first introduced an end-to-end neural-based approach with sequence-level training and uses a convolutional neural network to model the context window. However, their best system fell short of traditional feature-based systems. Neural methods have also been used as classifiers in transition-based SRL systems (Henderson et al., 2013; Swayamdipta et al., 2016). Most recently, several successful LSTM-based architectures have achieved state-of-the-art results in English span-based SRL (Zhou and Xu, 2015), Chinese SRL (Wang et al., 2015), and dependencybased SRL (Marcheggiani et al., 2017) with little to no syntactic input. Our techniques push results to more than 3 F1 over the best syntax-based models. However, we also show that there is potential for syntax to further improve performance. 6 Conclusion and Future Work We presented a new deep learning model for spanbased semantic role labeling with a 10% relative error reduction over the previous state of the art. Our ensemble of 8 layer BiLSTMs incorporated some of the recent best practices such as orthonormal initialization, RNN-dropout, and highway connections, and we have shown that they are crucial for getting good results with deep models. Extensive error analysis sheds light on the strengths and limitations of our deep SRL model, with detailed comparison against shallower models and two strong non-neural systems. While our deep model is better at recovering longdistance predicate-argument relations, we still observe structural inconsistencies, which can be alleviated by constrained decoding. Finally, we posed the question of whether deep SRL still needs syntactic supervision. Despite recent success without syntactic input, we found that our best neural model can still benefit from accurate syntactic parser output via straightforward constrained decoding. In our oracle experiment, we observed a 3 F1 improvement by leveraging gold syntax, showing the potential for high quality parsers to further improve deep SRL models. Acknowledgments The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS1252835, IIS-1562364), gifts from Google and Tencent, and an Allen Distinguished Investigator Award. We are grateful to Mingxuan Wang for sharing his highway LSTM implementation and Sameer Pradhan for help with the CoNLL 2012 dataset. 
We thank Nicholas FitzGerald, Dan Garrette, Julian Michael, Hao Peng, and Swabha Swayamdipta for helpful comments, and the anonymous reviewers for valuable feedback. 481 References Claire Bonial, Olga Babko-Malaya, Jinho D Choi, Jena Hwang, and Martha Palmer. 2010. Propbank annotation guidelines. Center for Computational Language and Education Research Institute of Cognitive Science University of Colorado at Boulder . Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 152–164. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of the First North American chapter of the Association for Computational Linguistics conference (NAACL). Association for Computational Linguistics, pages 132–139. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proc. of the 2016 Conference of Empirical Methods in Natural Language Processing (EMNLP). Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 960–970. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems. pages 1019–1027. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics 39(4):949–998. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation 14(8):1771–1800. Paul Kingsbury, Martha Palmer, and Mitch Marcus. 2002. Adding semantic annotation to the penn treebank. In Proceedings of the human language technology conference. pages 252–256. Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In Proc. of the 2012 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1048–1059. Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global neural ccg parsing with optimality guarantees. In Proc. of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Mike Lewis and Mark Steedman. 2014. A* ccg parsing with a supertag-factored model. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 990–1000. Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. arXiv preprint arXiv:1701.02593 . Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543. Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H Martin, and Daniel Jurafsky. 2005. Semantic role chunking combining complementary syntactic views. In Proc. of the 2005 Conference on Computational Natural Language Learning (CoNLL). pages 217–220. 
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proc. of the 2013 Conference on Computational Natural Language Learning (CoNLL). pages 143–152. Vasin Punyakanok, Peter Koomen, Dan Roth, and Wen-tau Yih. 2005. Generalized inference with multiple semantic role labeling systems. In Proc. of the 2005 Conference on Computational Natural Language Learning (CoNLL). Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics 34(2):257–287. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL). Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 . Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Advances in neural information processing systems. pages 2377–2385. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Greedy, joint syntacticsemantic parsing with stack lstms. In Proc. of the 2016 Conference on Computational Natural Language Learning (CoNLL). page 187. 482 Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learning for semantic role labeling. Transactions of the Association for Computational Linguistics 3:29–41. Kristina Toutanova, Aria Haghighi, and Christopher D Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics 34(2):161– 191. Zhen Wang, Tingsong Jiang, Baobao Chang, and Zhifang Sui. 2015. Chinese semantic role labeling with bidirectional recurrent neural networks. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1626– 1631. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA . Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yaco, Sanjeev Khudanpur, and James Glass. 2016. Highway long short-term memory rnns for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 5755–5759. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL). 483
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 484–495, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1045

Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access

Bhuwan Dhingra⋆ Lihong Li† Xiujun Li† Jianfeng Gao† Yun-Nung Chen‡ Faisal Ahmed† Li Deng†
⋆Carnegie Mellon University, Pittsburgh, PA, USA
†Microsoft Research, Redmond, WA, USA
‡National Taiwan University, Taipei, Taiwan
⋆[email protected] †{lihongli,xiul,jfgao}@microsoft.com ‡[email protected]

Abstract

This paper proposes KB-InfoBot1 — a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries. Such goal-oriented dialogue agents typically need to interact with an external database to access real-world knowledge. Previous systems achieved this by issuing a symbolic query to the KB to retrieve entries based on their attributes. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced “soft” posterior distribution over the KB that indicates which entities the user is interested in. Integrating the soft retrieval process with a reinforcement learner leads to higher task success rate and reward in both simulations and against real users. We also present a fully neural end-to-end agent, trained entirely from user feedback, and discuss its application towards personalized dialogue agents.

1 Introduction

The design of intelligent assistants which interact with users in natural language ranks high on the agenda of current NLP research. With an increasing focus on the use of statistical and machine learning based approaches (Young et al., 2013), the last few years have seen some truly remarkable conversational agents appear on the market (e.g. Apple Siri, Microsoft Cortana, Google Allo). These agents can perform simple tasks, answer factual questions, and sometimes also aimlessly chit-chat with the user, but they still lag far behind a human assistant in terms of both the variety and complexity of tasks they can perform. In particular, they lack the ability to learn from interactions with a user in order to improve and adapt with time. Recently, Reinforcement Learning (RL) has been explored to leverage user interactions to adapt various dialogue agents designed, respectively, for task completion (Gašić et al., 2013), information access (Wen et al., 2016b), and chit-chat (Li et al., 2016a). We focus on KB-InfoBots, a particular type of dialogue agent that helps users navigate a Knowledge Base (KB) in search of an entity, as illustrated by the example in Figure 1. Such agents must necessarily query databases in order to retrieve the requested information. This is usually done by performing semantic parsing on the input to construct a symbolic query representing the beliefs of the agent about the user goal, such as Wen et al. (2016b), Williams and Zweig (2016), and Li et al. (2017)'s work. We call such an operation a Hard-KB lookup.

1The source code is available at: https://github.com/MiuLab/KB-InfoBot
While natural, this approach has two drawbacks: (1) the retrieved results do not carry any information about uncertainty in semantic parsing, and (2) the retrieval operation is non-differentiable, and hence the parser and dialog policy are trained separately. This makes online end-to-end learning from user feedback difficult once the system is deployed. In this work, we propose a probabilistic framework for computing the posterior distribution of the user target over a knowledge base, which we term a Soft-KB lookup. This distribution is constructed from the agent's belief about the attributes of the entity being searched for. The dialogue policy network, which decides the next system action, receives as input this full distribution instead of a handful of retrieved results.

[Figure 1: An interaction between a user looking for a movie and the KB-InfoBot. An entity-centric knowledge base is shown above the KB-InfoBot (missing values denoted by X).]

We show in our experiments that this framework allows the agent to achieve a higher task success rate in fewer dialogue turns. Further, the retrieval process is differentiable, allowing us to construct an end-to-end trainable KB-InfoBot, all of whose components are updated online using RL.

Reinforcement learners typically require an environment to interact with, and hence static dialogue corpora cannot be used for their training. Running experiments on human subjects, on the other hand, is unfortunately too expensive. A common workaround in the dialogue community (Young et al., 2013; Schatzmann et al., 2007b; Scheffler and Young, 2002) is to instead use user simulators which mimic the behavior of real users in a consistent manner. For training KB-InfoBot, we adapt the publicly available2 simulator described in Li et al. (2016b).

Evaluation of dialogue agents has been the subject of much research (Walker et al., 1997; Möller et al., 2006). While the metrics for evaluating an InfoBot are relatively clear — the agent should return the correct entity in a minimum number of turns — the environment for testing it is less settled. Unlike previous KB-based QA systems, our focus is on multi-turn interactions, and as such there are no publicly available benchmarks for this problem. We evaluate several versions of KB-InfoBot with the simulator and on real users, and show that the proposed Soft-KB lookup helps the reinforcement learner discover better dialogue policies. Initial experiments on the end-to-end agent also demonstrate its strong learning capability.

2https://github.com/MiuLab/TC-Bot

2 Related Work

Our work is motivated by the neural GenQA (Yin et al., 2016a) and neural enquirer (Yin et al., 2016b) models for querying KBs via natural language in a fully “neuralized” way. However, the key difference is that these systems assume that users can compose a complicated, compositional natural language query that can uniquely identify the element/answer in the KB. The research task is to parse the query, i.e., turning the natural language query into a sequence of SQL-like operations. Instead we focus on how to query a KB interactively without composing such complicated queries in the first place.
Our work is motivated by the observations that (1) users are more used to issuing simple queries of length less than 5 words (Spink et al., 2001); (2) in many cases, it is unreasonable to assume that users can construct compositional queries without prior knowledge of the structure of the KB to be queried. Also related is the growing body of literature focused on building end-to-end dialogue systems, which combine feature extraction and policy optimization using deep neural networks. Wen et al. (2016b) introduced a modular neural dialogue agent, which uses a Hard-KB lookup, thus breaking the differentiability of the whole system. As a result, training of various components of the dialogue system is performed separately. The intent network and belief trackers are trained using supervised labels specifically collected for them; while the policy network and generation network are trained separately on the system utterances. We retain modularity of the network by keeping the belief trackers separate, but replace the hard lookup with a differentiable one. Dialogue agents can also interface with the database by augmenting their output action space with predefined API calls (Williams and Zweig, 2016; Zhao and Eskenazi, 2016; Bordes and Weston, 2016; Li et al., 2017). The API calls modify a query hypothesis maintained outside the end-toend system which is used to retrieve results from this KB. This framework does not deal with uncertainty in language understanding since the query hypothesis can only hold one slot-value at a time. Our approach, on the other hand, directly models the uncertainty to construct the posterior over the KB. Wu et al. (2015) presented an entropy minimization dialogue management strategy for In485 foBots. The agent always asks for the value of the slot with maximum entropy over the remaining entries in the database, which is optimal in the absence of language understanding errors, and serves as a baseline against our approach. Reinforcement learning neural turing machines (RLNTM) (Zaremba and Sutskever, 2015) also allow neural controllers to interact with discrete external interfaces. The interface considered in that work is a one-dimensional memory tape, while in our work it is an entity-centric KB. 3 Probabilistic KB Lookup This section describes a probabilistic framework for querying a KB given the agent’s beliefs over the fields in the KB. 3.1 Entity-Centric Knowledge Base (EC-KB) A Knowledge Base consists of triples of the form (h, r, t), which denotes that relation r holds between the head h and tail t. We assume that the KB-InfoBot has access to a domain-specific entity-centric knowledge base (EC-KB) (Zwicklbauer et al., 2013) where all head entities are of a particular type (such as movies or persons), and the relations correspond to attributes of these head entities. Such a KB can be converted to a table format whose rows correspond to the unique head entities, columns correspond to the unique relation types (slots henceforth), and some entries may be missing. An example is shown in Figure 1. 3.2 Notations and Assumptions Let T denote the KB table described above and Ti,j denote the jth slot-value of the ith entity. 1 ≤i ≤N and 1 ≤j ≤M. We let V j denote the vocabulary of each slot, i.e. the set of all distinct values in the j-th column. We denote missing values from the table with a special token and write Ti,j = Ψ. Mj = {i : Ti,j = Ψ} denotes the set of entities for which the value of slot j is missing. 
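To make this notation concrete, the toy table below mirrors the KB of Figure 1 and computes the quantities used throughout the section: the slot vocabularies V_j, the missing-value sets M_j, and the value counts N_j(v). This is a minimal sketch with our own names (MISSING stands for the token Ψ) and is not part of the released code; the remaining assumptions of this subsection continue below.

```python
import numpy as np
from collections import Counter

MISSING = "<PSI>"  # stands in for the special missing-value token Psi

# A toy entity-centric KB: rows are head entities, columns are slots.
slots = ["movie", "actor", "release_year"]
kb = np.array([
    ["Groundhog Day",      "Bill Murray",   "1993"],
    ["Australia",          "Nicole Kidman", MISSING],
    ["Mad Max: Fury Road", MISSING,         "2015"],
], dtype=object)

N, M = kb.shape  # N entities, M slots

# V_j: distinct (non-missing) values of slot j.
slot_vocab = [sorted({v for v in kb[:, j] if v != MISSING}) for j in range(M)]

# M_j: indices of rows whose value for slot j is missing.
missing_rows = [np.where(kb[:, j] == MISSING)[0] for j in range(M)]

# N_j(v): how many rows take value v in slot j.
value_counts = [Counter(v for v in kb[:, j] if v != MISSING) for j in range(M)]

print(slot_vocab[2])    # ['1993', '2015']
print(missing_rows[2])  # [1]
print(value_counts[2])  # Counter({'1993': 1, '2015': 1})
```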
Note that the user may still know the actual value of Ti,j, and we assume this lies in V j. We do not deal with new entities or relations at test time. We assume a uniform prior G ∼U[{1, ...N}] over the rows in the table T , and let binary random variables Φj ∈{0, 1} indicate whether the user knows the value of slot j or not. The agent maintains M multinomial distributions pt j(v) for v ∈V j denoting the probability at turn t that the user constraint for slot j is v, given their utterances U t 1 till that turn. The agent also maintains M binomials qt j = Pr(Φj = 1) which denote the probability that the user knows the value of slot j. We assume that column values are independently distributed to each other. This is a strong assumption but it allows us to model the user goal for each slot independently, as opposed to modeling the user goal over KB entities directly. Typically maxj |V j| < N and hence this assumption reduces the number of parameters in the model. 3.3 Soft-KB Lookup Let pt T (i) = Pr(G = i|U t 1) be the posterior probability that the user is interested in row i of the table, given the utterances up to turn t. We assume all probabilities are conditioned on user inputs U t 1 and drop it from the notation below. From our assumption of independence of slot values, we can write pt T (i) ∝QM j=1 Pr(Gj = i), where Pr(Gj = i) denotes the posterior probability of user goal for slot j pointing to Ti,j. Marginalizing this over Φj gives: Pr(Gj = i) = 1 X φ=0 Pr(Gj = i, Φj = φ) (1) = qt j Pr(Gj = i|Φj = 1)+ (1 −qt j) Pr(Gj = i|Φj = 0). For Φj = 0, the user does not know the value of the slot, and from the prior: Pr(Gj = i|Φj = 0) = 1 N , 1 ≤i ≤N (2) For Φj = 1, the user knows the value of slot j, but this may be missing from T , and we again have two cases: Pr(Gj = i|Φj = 1) = ( 1 N , i ∈Mj pt j(v) Nj(v) 1 − |Mj| N  , i ̸∈Mj (3) Here, Nj(v) is the count of value v in slot j. Detailed derivation for (3) is provided in Appendix A. Combining (1), (2), and (3) gives us the procedure for computing the posterior over KB entities. 4 Towards an End-to-End-KB-InfoBot We claim that the Soft-KB lookup method has two benefits over the Hard-KB method – (1) it helps the agent discover better dialogue policies by providing it more information from the language understanding unit, (2) it allows end-to-end training of both dialogue policy and language understanding in an online setting. In this section we describe several agents to test these claims. 486 Belief Trackers Policy Network Beliefs Summary Soft-KB Lookup KB-InfoBot User User Utterance System Action Figure 2: High-level overview of the end-to-end KB-InfoBot. Components with trainable parameters are highlighted in gray. 4.1 Overview Figure 2 shows an overview of the components of the KB-InfoBot. At each turn, the agent receives a natural language utterance ut as input, and selects an action at as output. The action space, denoted by A, consists of M +1 actions — request(slot=i) for 1 ≤i ≤M will ask the user for the value of slot i, and inform(I) will inform the user with an ordered list of results I from the KB. The dialogue ends once the agent chooses inform. 
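The Soft-KB lookup of Section 3.3 (Eqs. 1 to 3, combined with the independence assumption across slots) can be written compactly. Below is a minimal NumPy sketch, assuming the belief states p_j and q_j are given; the function and variable names are ours, not the authors' implementation.

```python
import numpy as np

def soft_kb_lookup(kb, missing_token, p_beliefs, q_beliefs):
    """Posterior over KB rows, following Eqs. (1)-(3).

    kb           : (N, M) object array of slot values (may contain missing_token)
    p_beliefs[j] : dict mapping value v -> p_j(v), the agent's belief for slot j
    q_beliefs[j] : probability that the user knows the value of slot j
    """
    N, M = kb.shape
    post = np.ones(N)
    for j in range(M):
        col = kb[:, j]
        n_missing = np.sum(col == missing_token)
        # N_j(v): count of each non-missing value in slot j
        counts = {}
        for v in col:
            if v != missing_token:
                counts[v] = counts.get(v, 0) + 1

        pr_j = np.empty(N)
        for i, v in enumerate(col):
            if v == missing_token:
                known = 1.0 / N                       # Eq. (3), case i in M_j
            else:                                     # Eq. (3), case i not in M_j
                known = (p_beliefs[j].get(v, 0.0) / counts[v]) * (1.0 - n_missing / N)
            unknown = 1.0 / N                         # Eq. (2)
            pr_j[i] = q_beliefs[j] * known + (1.0 - q_beliefs[j]) * unknown  # Eq. (1)
        post *= pr_j                                  # slots assumed independent
    return post / post.sum()
```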
We adopt a modular approach, typical to goaloriented dialogue systems (Wen et al., 2016b), consisting of: a belief tracker module for identifying user intents, extracting associated slots, and tracking the dialogue state (Yao et al., 2014; Hakkani-T¨ur et al., 2016; Chen et al., 2016b; Henderson et al., 2014; Henderson, 2015); an interface with the database to query for relevant results (Soft-KB lookup); a summary module to summarize the state into a vector; a dialogue policy which selects the next system action based on current state (Young et al., 2013). We assume the agent only responds with dialogue acts. A templatebased Natural Language Generator (NLG) can be easily constructed for converting dialogue acts into natural language. 4.2 Belief Trackers The InfoBot consists of M belief trackers, one for each slot, which get the user input xt and produce two outputs, pt j and qt j, which we shall collectively call the belief state: pt j is a multinomial distribution over the slot values v, and qt j is a scalar probability of the user knowing the value of slot j. We describe two versions of the belief tracker. Hand-Crafted Tracker: We first identify mentions of slot-names (such as “actor”) or slot-values (such as “Bill Murray”) from the user input ut, using token-level keyword search. Let {w ∈x} denote the set of tokens in a string x3, then for each slot in 1 ≤j ≤M and each value v ∈V j, we compute its matching score as follows: st j[v] = |{w ∈ut} ∩{w ∈v}| |{w ∈v}| (4) A similar score bt j is computed for the slot-names. A one-hot vector reqt ∈{0, 1}M denotes the previously requested slot from the agent, if any. qt j is set to 0 if reqt[j] is 1 but st j[v] = 0 ∀v ∈V j, i.e. the agent requested for a slot but did not receive a valid value in return, else it is set to 1. Starting from an prior distribution p0 j (based on the counts of the values in the KB), pt j[v] is updated as: pt j[v] ∝pt−1 j [v] + C st j[v] + bt j + 1(reqt[j] = 1)  (5) Here C is a tuning parameter, and the normalization is given by setting the sum over v to 1. Neural Belief Tracker: For the neural tracker the user input ut is converted to a vector representation xt, using a bag of n-grams (with n = 2) representation. Each element of xt is an integer indicating the count of a particular n-gram in ut. We let V n denote the number of unique n-grams, hence xt ∈NV n 0 . Recurrent neural networks have been used for belief tracking (Henderson et al., 2014; Wen et al., 2016b) since the output distribution at turn t depends on all user inputs till that turn. We use a Gated Recurrent Unit (GRU) (Cho et al., 2014) for each tracker, which, starting from h0 j = 0 computes ht j = GRU(x1, . . . , xt) (see Appendix B for details). ht j ∈Rd can be interpreted as a summary of what the user has said about slot j till turn t. The belief states are computed from this vector as follows: pt j = softmax(W p j ht j + bp j) (6) qt j = σ(W Φ j ht j + bΦ j ) (7) Here W p j ∈RV j×d, bp j ∈RV j, W Φ j ∈Rd and bΦ j ∈R, are trainable parameters. 4.3 Soft-KB Lookup + Summary This module uses the Soft-KB lookup described in section 3.3 to compute the posterior pt T ∈RN over the EC-KB from the belief states (pt j, qt j). 3We use the NLTK tokenizer available at http://www. nltk.org/api/nltk.tokenize.html 487 Collectively, outputs of the belief trackers and the soft-KB lookup can be viewed as the current dialogue state internal to the KB-InfoBot. 
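The hand-crafted tracker above (Eqs. 4 and 5) reduces to token overlap followed by an additive, renormalized update. A hedged sketch, using plain whitespace tokenization instead of the NLTK tokenizer and our own names:

```python
def value_match_score(utterance, value):
    """Eq. (4): fraction of the value's tokens that appear in the utterance."""
    u_tokens = set(utterance.lower().split())
    v_tokens = set(value.lower().split())
    return len(u_tokens & v_tokens) / len(v_tokens)

def update_slot_belief(prev_p, utterance, slot_name, requested, C=1.0):
    """Eq. (5): p_j^t[v] is proportional to p_j^{t-1}[v] + C*(s_j^t[v] + b_j^t + 1[req]).

    prev_p    : dict value -> probability from the previous turn (or the prior p_j^0)
    requested : True if the agent asked for this slot in the previous turn
    """
    b = value_match_score(utterance, slot_name)   # slot-name match score b_j^t
    new_p = {}
    for v, p in prev_p.items():
        s = value_match_score(utterance, v)       # slot-value match score s_j^t[v]
        new_p[v] = p + C * (s + b + (1.0 if requested else 0.0))
    Z = sum(new_p.values())
    return {v: p / Z for v, p in new_p.items()}

def update_q(requested, any_value_matched):
    """q_j^t is 0 only if the slot was requested but no valid value was found."""
    return 0.0 if (requested and not any_value_matched) else 1.0
```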
Let st = [pt 1, pt 2, ..., pt M, qt 1, qt 2, ..., qt M, pt T ] be the vector of size P j V j +M +N denoting this state. It is possible for the agent to directly use this state vector to select its next action at. However, the large size of the state vector would lead to a large number of parameters in the policy network. To improve efficiency we extract summary statistics from the belief states, similar to (Williams and Young, 2005). Each slot is summarized into an entropy statistic over a distribution wt j computed from elements of the KB posterior pt T as follows: wt j(v) ∝ X i:Ti,j=v pt T (i) + p0 j(v) X i:Ti,j=Ψ pt T (i) . (8) Here, p0 j is a prior distribution over the values of slot j, estimated using counts of each value in the KB. The probability mass of v in this distribution is the agent’s confidence that the user goal has value v in slot j. This two terms in (8) correspond to rows in KB which have value v, and rows whose value is unknown (weighted by the prior probability that an unknown might be v). Then the summary statistic for slot j is the entropy H(wt j). The KB posterior pt T is also summarized into an entropy statistic H(pt T ). The scalar probabilities qt j are passed as is to the dialogue policy, and the final summary vector is ˜st = [H(˜pt 1), ..., H(˜pt M), qt 1, ..., qt M, H(pt T )]. Note that this vector has size 2M + 1. 4.4 Dialogue Policy The dialogue policy’s job is to select the next action based on the current summary state ˜st and the dialogue history. We present a hand-crafted baseline and a neural policy network. Hand-Crafted Policy: The rule based policy is adapted from (Wu et al., 2015). It asks for the slot ˆj = arg min H(˜pt j) with the minimum entropy, except if – (i) the KB posterior entropy H(pt T ) < αR, (ii) H(˜pt j) < min(αT , βH(˜p0 j), (iii) slot j has already been requested Q times. αR, αT , β, Q are tuned to maximize reward against the simulator. Neural Policy Network: For the neural approach, similar to (Williams and Zweig, 2016; Zhao and Eskenazi, 2016), we use an RNN to allow the network to maintain an internal state of dialogue history. Specifically, we use a GRU unit followed by a fully-connected layer and softmax nonlinearity to model the policy π over actions in A (W π ∈R|A|×d, bπ ∈R|A|): ht π = GRU(˜s1, ..., ˜st) (9) π = softmax(W πht π + bπ) . (10) During training, the agent samples its actions from the policy to encourage exploration. If this action is inform(), it must also provide an ordered set of entities indexed by I = (i1, i2, . . . , iR) in the KB to the user. This is done by sampling R items from the KB-posterior pt T . This mimics a search engine type setting, where R may be the number of results on the first page. 5 Training Parameters of the neural components (denoted by θ) are trained using the REINFORCE algorithm (Williams, 1992). We assume that the learner has access to a reward signal rt throughout the course of the dialogue, details of which are in the next section. We can write the expected discounted return of the agent under policy π as J(θ) = Eπ hPH t=0 γtrt i (γ is the discounting factor). We also use a baseline reward signal b, which is the average of all rewards in a batch, to reduce the variance in the updates (Greensmith et al., 2004). 
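The summary step of Section 4.3 (Eq. 8 followed by entropies) can also be sketched directly. This is a minimal NumPy version with our own names; it produces the 2M + 1 dimensional vector that the policy receives.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def summary_state(kb, missing_token, post, priors, q_beliefs):
    """Summary vector [H(w_1), ..., H(w_M), q_1, ..., q_M, H(p_T)].

    post      : posterior p_T over KB rows (length N)
    priors[j] : dict value -> prior p_j^0(v), estimated from KB counts
    """
    N, M = kb.shape
    feats = []
    for j in range(M):
        col = kb[:, j]
        unknown_mass = post[col == missing_token].sum()
        w = []
        for v, prior_v in priors[j].items():
            # Eq. (8): mass of rows with value v, plus prior-weighted mass of
            # rows whose value for slot j is missing.
            w.append(post[col == v].sum() + prior_v * unknown_mass)
        feats.append(entropy(w))
    return np.array(feats + list(q_beliefs) + [entropy(post)])
```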
When only training the dialogue policy π using this signal, updates are given by (details in Appendix C): ∇θJ(θ) = Eπ h H X k=0 ∇θ log πθ(ak) H X t=0 γt(rt−b) i , (11) For end-to-end training we need to update both the dialogue policy and the belief trackers using the reinforcement signal, and we can view the retrieval as another policy µθ (see Appendix C). The updates are given by: ∇θJ(θ) =Ea∼π,I∼µ h∇θ log µθ(I)+ H X h=0 ∇θ log πθ(ah)  H X k=0 γk(rk −b) i , (12) In the case of end-to-end learning, we found that for a moderately sized KB, the agent almost always fails if starting from random initialization. 488 In this case, credit assignment is difficult for the agent, since it does not know whether the failure is due to an incorrect sequence of actions or incorrect set of results from the KB. Hence, at the beginning of training we have an Imitation Learning (IL) phase where the belief trackers and policy network are trained to mimic the hand-crafted agents. Assume that ˆpt j and ˆqt j are the belief states from a rule-based agent, and ˆat its action at turn t. Then the loss function for imitation learning is: L(θ) = E  D(ˆpt j||pt j(θ))+H(ˆqt j, qt j(θ))−log πθ(ˆat)  D(p||q) and H(p, q) denote the KL divergence and cross-entropy between p and q respectively. The expectations are estimated using a minibatch of dialogues of size B. For RL we use RMSProp (Hinton et al., 2012) and for IL we use vanilla SGD updates to train the parameters θ. 6 Experiments and Results Previous work in KB-based QA has focused on single-turn interactions and is not directly comparable to the present study. Instead we compare different versions of the KB-InfoBot described above to test our claims. 6.1 KB-InfoBot versions We have described two belief trackers – (A) HandCrafted and (B) Neural, and two dialogue policies – (C) Hand-Crafted and (D) Neural. Rule agents use the hand-crafted belief trackers and hand-crafted policy (A+C). RL agents use the hand-crafted belief trackers and the neural policy (A+D). We compare three variants of both sets of agents, which differ only in the inputs to the dialogue policy. The No-KB version only takes entropy H(ˆpt j) of each of the slot distributions. The Hard-KB version performs a hard-KB lookup and selects the next action based on the entropy of the slots over retrieved results. This is the same approach as in Wen et al. (2016b), except that we take entropy instead of summing probabilities. The Soft-KB version takes summary statistics of the slots and KB posterior described in Section 4. At the end of the dialogue, all versions inform the user with the top results from the KB posterior pt T , hence the difference only lies in the policy for action selection. Lastly, the E2E agent uses the neural belief tracker and the neural policy (B+D), with a Soft-KB lookup. For the RL agents, we also append ˆqt j and a one-hot encoding of the previous KB-split N M maxj |V j| |Mj| Small 277 6 17 20% Medium 428 6 68 20% Large 857 6 101 20% X-Large 3523 6 251 20% Table 1: Movies-KB statistics for four splits. Refer to Section 3.2 for description of columns. agent action to the policy network input. Hyperparameter details for the agents are provided in Appendix D. 6.2 User Simulator Training reinforcement learners is challenging because they need an environment to operate in. In the dialogue community it is common to use simulated users for this purpose (Schatzmann et al., 2007a,b; Cuay´ahuitl et al., 2005; Asri et al., 2016). 
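Before moving on to the simulator, the policy-gradient update of Eq. (11) with its batch-mean baseline can be illustrated in code. This is a hedged sketch only: a tabular softmax policy stands in for the GRU policy network so the log-probability gradient has a closed form, the whole discounted return minus the batch-mean return replaces the per-step baseline term, and all names are ours.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Sum over t of gamma^t * r_t for one episode."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def reinforce_grad(theta, episodes, gamma=0.99):
    """REINFORCE with a batch-mean baseline for a tabular softmax policy.

    theta    : (S, A) logits; pi(a|s) = softmax(theta[s])
    episodes : list of (states, actions, rewards) trajectories
    Returns a gradient estimate to be *added* to theta (gradient ascent on J).
    """
    returns = [discounted_return(r, gamma) for _, _, r in episodes]
    baseline = np.mean(returns)                   # variance-reduction baseline b
    grad = np.zeros_like(theta, dtype=float)
    for (states, actions, _), G in zip(episodes, returns):
        for s, a in zip(states, actions):
            probs = np.exp(theta[s] - theta[s].max())
            probs /= probs.sum()
            # grad of log pi(a|s) for a softmax policy: one_hot(a) - probs
            g_logp = -probs
            g_logp[a] += 1.0
            grad[s] += g_logp * (G - baseline)
    return grad / len(episodes)
```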
In this work we adapt the publicly-available user simulator presented in Li et al. (2016b) to follow a simple agenda while interacting with the KB-InfoBot, as well as produce natural language utterances . Details about the simulator are included in Appendix E. During training, the simulated user also provides a reward signal at the end of each dialogue. The dialogue is a success if the user target is in top R = 5 results returned by the agent; and the reward is computed as max(0, 2(1 −(r −1)/R)), where r is the actual rank of the target. For a failed dialogue the agent receives a reward of −1, and at each turn it receives a reward of −0.1 to encourage short sessions4. The maximum length of a dialogue is 10 turns beyond which it is deemed a failure. 6.3 Movies-KB We use a movie-centric KB constructed using the IMDBPy5 package. We constructed four different splits of the dataset, with increasing number of entities, whose statistics are given in Table 1. The original KB was modified to reduce the number of actors and directors in order to make the task more challenging6. We randomly remove 20% of the values from the agent’s copy of the KB to simulate a scenario where the KB may be incomplete. The user, however, may still know these values. 4A turn consists of one user action and one agent action. 5http://imdbpy.sourceforge.net/ 6We restricted the vocabulary to the first few unique values of these slots and replaced all other values with a random value from this set. 489 Agent Small KB Medium KB Large KB X-Large KB T S R T S R T S R T S R No KB Rule 5.04 .64 .26±.02 5.05 .77 .74±.02 4.93 .78 .82±.02 4.84 .66 .43±.02 RL 2.65 .56 .24±.02 3.32 .76 .87±.02 3.71 .79 .94±.02 3.64 .64 .50±.02 Hard KB Rule 5.04 .64 .25±.02 3.66 .73 .75±.02 4.27 .75 .78±.02 4.84 .65 .42±.02 RL 3.36 .62 .35±.02 3.07 .75 .86±.02 3.53 .79 .98±.02 2.88 .62 .53±.02 Soft KB Rule 2.12 .57 .32±.02 3.94 .76 .83±.02 3.74 .78 .93±.02 4.51 .66 .51±.02 RL 2.93 .63 .43±.02 3.37 .80 .98±.02 3.79 .83 1.05±.02 3.65 .68 .62±.02 E2E 3.13 .66 .48±.02 3.27 .83 1.10±.02 3.51 .83 1.10±.02 3.98 .65 .50±.02 Max 3.44 1.0 1.64 2.96 1.0 1.78 3.26 1.0 1.73 3.97 1.0 1.37 Table 2: Performance comparison. Average (±std error) for 5000 runs after choosing the best model during training. T: Average number of turns. S: Success rate. R: Average reward. 6.4 Simulated User Evaluation We compare each of the discussed versions along three metrics: the average rewards obtained (R), success rate (S) (where success is defined as providing the user target among top R results), and the average number of turns per dialogue (T). For the RL and E2E agents, during training we fix the model every 100 updates and run 2000 simulations with greedy action selection to evaluate its performance. Then after training we select the model with the highest average reward and run a further 5000 simulations and report the performance in Table 2. For reference we also show the performance of an agent which receives perfect information about the user target without any errors, and selects actions based on the entropy of the slots (Max). This can be considered as an upper bound on the performance of any agent (Wu et al., 2015). In each case the Soft-KB versions achieve the highest average reward, which is the metric all agents optimize. In general, the trade-off between minimizing average turns and maximizing success rate can be controlled by changing the reward signal. 
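Returning to the simulator's reward signal described in Section 6.2 above, it can be written in a few lines. A minimal sketch assuming a 1-based rank and the constants stated in the paper (R = 5, per-turn penalty -0.1, failure reward -1); the function name is ours.

```python
def final_reward(target_rank, R=5):
    """End-of-dialogue reward from the simulated user.

    target_rank : 1-based rank of the user's target among the R returned results,
                  or None if the target was not returned (failed dialogue).
    """
    if target_rank is None or target_rank > R:
        return -1.0                                   # failure
    return max(0.0, 2.0 * (1.0 - (target_rank - 1) / R))

PER_TURN_PENALTY = -0.1   # applied at every turn to encourage short sessions

print(final_reward(1))     # 2.0  (target ranked first)
print(final_reward(5))     # 0.4
print(final_reward(None))  # -1.0
```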
Note that, except the E2E version, all versions share the same belief trackers, but by re-asking values of some slots they can have different posteriors pt T to inform the results. This shows that having full information about the current state of beliefs over the KB helps the Soft-KB agent discover better policies. Further, reinforcement learning helps discover better policies than the handcrafted rule-based agents, and we see a higher reward for RL agents compared to Rule ones. This is due to the noisy natural language inputs; with perfect information the rule-based strategy is optimal. Interestingly, the RL-Hard agent has the minimum number of turns in 2 out of the 4 settings, at the cost of a lower success rate and average reward. This agent does not receive any information about the uncertainty in semantic parsing, and it tends to RL­Hard Rule­Soft RL­Soft E2E­Soft 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Success Rate p=0.01 ns p=0.03 109 105 121 103 RL­Hard Rule­Soft RL­Soft E2E­Soft 1 2 3 4 5 6 7 8 9 10 # Turns Figure 3: Performance of KB-InfoBot versions when tested against real users. Left: Success rate, with the number of test dialogues indicated on each bar, and the p-values from a two-sided permutation test. Right: Distribution of the number of turns in each dialogue (differences in mean are significant with p < 0.01). inform as soon as the number of retrieved results becomes small, even if they are incorrect. Among the Soft-KB agents, we see that E2E>RL>Rule, except for the X-Large KB. For E2E, the action space grows exponentially with the size of the KB, and hence credit assignment gets more difficult. Future work should focus on improving the E2E agent in this setting. The difficulty of a KB-split depends on number of entities it has, as well as the number of unique values for each slot (more unique values make the problem easier). Hence we see that both the “Small” and “X-Large” settings lead to lower reward for the agents, since maxj |V j| N is small for them. 6.5 Human Evaluation We further evaluate the KB-InfoBot versions trained using the simulator against real subjects, recruited from the author’s affiliations. In each session, in a typed interaction, the subject was first presented with a target movie from the “Medium” KB-split along with a subset of its associated slot490 Turn Dialogue Rank Dialogue Rank Dialogue Rank 1 can i get a movie directed by maiellaro 75 find a movie directed by hemecker 7 peter greene acted in a family comedy - what was it? 35 request actor request actor request actor 2 neal 2 i dont know 7 peter 28 request mpaa_rating request mpaa_rating request mpaa_rating 3 not sure about that 2 i dont know 7 i don't know that 28 request critic_rating request critic_rating request critic_rating 4 i don't remember 2 7.6 13 the critics rated it as 6.5 3 request genre request critic_rating inform 5 i think it's a crime movie 1 7.9 23 inform request critic_rating 6 7.7 41 inform Figure 4: Sample dialogues between users and the KB-InfoBot (RL-Soft version). Each turn begins with a user utterance followed by the agent response. Rank denotes the rank of the target movie in the KB-posterior after each turn. values from the KB. To simulate the scenario where end-users may not know slot values correctly, the subjects in our evaluation were presented multiple values for the slots from which they could choose any one while interacting with the agent. 
Subjects were asked to initiate the conversation by specifying some of these values, and respond to the agent’s subsequent requests, all in natural language. We test RL-Hard and the three Soft-KB agents in this study, and in each session one of the agents was picked at random for testing. In total, we collected 433 dialogues, around 20 per subject. Figure 3 shows a comparison of these agents in terms of success rate and number of turns, and Figure 4 shows some sample dialogues from the user interactions with RL-Soft. In comparing Hard-KB versus Soft-KB lookup methods we see that both Rule-Soft and RL-Soft agents achieve a higher success rate than RL-Hard, while E2E-Soft does comparably. They do so in an increased number of average turns, but achieve a higher average reward as well. Between RL-Soft and Rule-Soft agents, the success rate is similar, however the RL agent achieves that rate in a lower number of turns on average. RL-Soft achieves a success rate of 74% on the human evaluation and 80% against the simulated user, indicating minimal overfitting. However, all agents take a higher number of turns against real users as compared to the simulator, due to the noisier inputs. The E2E gets the highest success rate against the simulator, however, when tested against real users it performs poorly with a lower success rate and a higher number of turns. Since it has more trainable components, this agent is also most prone to overfitting. In particular, the vocabulary of the simulator it is trained against is quite limited (V n = 3078), and hence when real users 1.0 1.5 2.0 NLG Temperature 0.2 0.4 0.6 0.8 1.0 1.2 Average Reward RL­Hard RL­Soft End2End Figure 5: Average rewards against simulator as temperature of softmax in NLG output is increased. Higher temperature leads to more noise in output. Average over 5000 simulations after selecting the best model during training. provided inputs outside this vocabulary, it performed poorly. In the future we plan to fix this issue by employing a better architecture for the language understanding and belief tracker components Hakkani-T¨ur et al. (2016); Liu and Lane (2016); Chen et al. (2016b,a), as well as by pretraining on separate data. While its generalization performance is poor, the E2E system also exhibits the strongest learning capability. In Figure 5, we compare how different agents perform against the simulator as the temperature of the output softmax in its NLG is increased. A higher temperature means a more uniform output distribution, which leads to generic simulator responses irrelevant to the agent questions. This is a simple way of introducing noise in the utterances. The performance of all agents drops as the temperature is increased, but less so for the E2E agent, which can adapt its belief tracker to the inputs it receives. Such adaptation 491 is key to the personalization of dialogue agents, which motivates us to introduce the E2E agent. 7 Conclusions and Discussion This work is aimed at facilitating the move towards end-to-end trainable dialogue agents for information access. We propose a differentiable probabilistic framework for querying a database given the agent’s beliefs over its fields (or slots). We show that such a framework allows the downstream reinforcement learner to discover better dialogue policies by providing it more information. We also present an E2E agent for the task, which demonstrates a strong learning capacity in simulations but suffers from overfitting when tested on real users. 
Given these results, we propose the following deployment strategy that allows a dialogue system to be tailored to specific users via learning from agent-user interactions. The system could start off with an RL-Soft agent (which gives good performance out-of-the-box). As the user interacts with this agent, the collected data can be used to train the E2E agent, which has a strong learning capability. Gradually, as more experience is collected, the system can switch from RL-Soft to the personalized E2E agent. Effective implementation of this, however, requires the E2E agent to learn quickly and this is the research direction we plan to focus on in the future. Acknowledgements We would like to thank Dilek Hakkani-T¨ur and reviewers for their insightful comments on the paper. We would also like to acknowledge the volunteers from Carnegie Mellon University and Microsoft Research for helping us with the human evaluation. Yun-Nung Chen is supported by the Ministry of Science and Technology of Taiwan under the contract number 105-2218-E-002-033, Institute for Information Industry, and MediaTek. References Layla El Asri, Jing He, and Kaheer Suleman. 2016. A sequence-to-sequence model for user simulation in spoken dialogue systems. arXiv preprint arXiv:1607.00070 . Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683 . Yun-Nung Chen, Dilek Hakanni-T¨ur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Guo, and Li Deng. 2016a. Syntax or semantics? knowledge-guided joint semantic frame parsing. Yun-Nung Chen, Dilek Hakkani-T¨ur, Gokhan Tur, Jianfeng Gao, and Li Deng. 2016b. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP . Heriberto Cuay´ahuitl, Steve Renals, Oliver Lemon, and Hiroshi Shimodaira. 2005. Human-computer dialogue simulation using hidden markov models. In Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on. IEEE, pages 290–295. M Gaˇsi´c, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013. Online policy optimisation of bayesian spoken dialogue systems via human interaction. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, pages 8367–8371. Peter W Glynn. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM 33(10):75–84. Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. 2004. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research 5(Nov):1471–1530. Dilek Hakkani-T¨ur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association. Matthew Henderson. 2015. Machine learning for dialog state tracking: A review. Machine Learning in Spoken Language Processing Workshop . Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. 
In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). pages 292– 299. Geoffrey Hinton, N Srivastava, and Kevin Swersky. 2012. Lecture 6a overview of mini–batch gradient descent. Coursera Lecture slides https://class. coursera. org/neuralnets-2012-001/lecture,[Online . Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016a. Deep reinforcement learning for dialogue generation. EMNLP . 492 Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016b. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688 . Xuijun Li, Yun-Nung Chen, Lihong Li, and Jianfeng Gao. 2017. End-to-end task-completion neural dialogue systems. arXiv preprint arXiv:1703.01008 . Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. Interspeech 2016 pages 685–689. Sebastian M¨oller, Roman Englert, Klaus-Peter Engelbrecht, Verena Vanessa Hafner, Anthony Jameson, Antti Oulasvirta, Alexander Raake, and Norbert Reithinger. 2006. Memo: towards automatic usability evaluation of spoken dialogue services by user error simulations. In INTERSPEECH. Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007a. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers. Association for Computational Linguistics, pages 149–152. Jost Schatzmann, Blaise Thomson, and Steve Young. 2007b. Statistical user simulation with a hidden agenda. Proc SIGDial, Antwerp 273282(9). Konrad Scheffler and Steve Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proceedings of the second international conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., pages 12–19. Amanda Spink, Dietmar Wolfram, Major BJ Jansen, and Tefko Saracevic. 2001. Searching the web: The public and their queries. Journal of the Association for Information Science and Technology 52(3):226– 234. Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. In Proceedings of the eighth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 271–280. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016a. Conditional generation and snapshot learning in neural dialogue systems. EMNLP . Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016b. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562 . Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. EMNLP . Jason D Williams and Steve Young. 2005. Scaling up POMDPs for dialog management: The “Summary POMDP” method. In IEEE Workshop on Automatic Speech Recognition and Understanding, 2005.. IEEE, pages 177–182. Jason D Williams and Geoffrey Zweig. 2016. Endto-end lstm-based dialog control optimized with supervised and reinforcement learning. 
arXiv preprint arXiv:1606.01269 . Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256. Ji Wu, Miao Li, and Chin-Hui Lee. 2015. A probabilistic framework for representing dialog systems and entropy-based dialog management through dynamic stochastic state evolution. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(11):2026–2035. Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE. IEEE, pages 189–194. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016a. Neural generative question answering. International Joint Conference on Artificial Intelligence . Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016b. Neural enquirer: Learning to query tables. International Joint Conference on Artificial Intelligence . Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179. Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural Turing machines-revised. arXiv preprint arXiv:1505.00521 . Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560 . Stefan Zwicklbauer, Christin Seifert, and Michael Granitzer. 2013. Do we need entity-centric knowledge bases for entity disambiguation? In Proceedings of the 13th International Conference on Knowledge Management and Knowledge Technologies. ACM, page 4. 493 A Posterior Derivation Here, we present a derivation for equation 3, i.e., the posterior over the KB slot when the user knows the value of that slot. For brevity, we drop Φj = 0 from the condition in all probabilities below. For the case when i ∈Mj, we can write: Pr(Gj = i) = Pr(Gj ∈Mj) Pr(Gj = i|Gj ∈Mj) = |Mj| N 1 |Mj| = 1 N , (13) where we assume all missing values to be equally likely, and estimate the prior probability of the goal being missing from the count of missing values in that slot. For the case when i = v ̸∈Mj: Pr(Gj = i) = Pr(Gj ̸∈Mj) Pr(Gj = i|Gj ̸∈Mj) =  1 −|Mj| N  × pt j(v) Nj(v) , (14) where the second term comes from taking the probability mass associated with v in the belief tracker and dividing it equally among all rows with value v. We can also verify that the above distribution is valid: i.e., it sums to 1: X i Pr(Gj = i) = X i∈Mj Pr(Gj = i) + X i̸∈Mj Pr(Gj = i) = X i∈Mj 1 N + X i̸∈Mj  1 −|Mj| N  pt j(v) #jv = |Mj| N +  1 −|Mj| N  X i̸∈Mj pt j(v) #jv = |Mj| N +  1 −|Mj| N  X i∈V j #jv pt j(v) #jv = |Mj| N +  1 −|Mj| N  × 1 = 1 . B Gated Recurrent Units A Gated Recurrent Unit (GRU) (Cho et al., 2014) is a recurrent neural network which operates on an input sequence x1, . . . , xt. Starting from an initial state h0 (usually set to 0 it iteratively computes the final output ht as follows: rt = σ(Wrxt + Urht−1 + br) zt = σ(Wzxt + Uzht−1 + bz) ˜ht = tanh(Whxt + Uh(rt ⊙ht−1) + bh) ht = (1 −zt) ⊙ht−1 + zt ⊙˜ht . (15) Here σ denotes the sigmoid nonlinearity, and ⊙an element-wise product. C REINFORCE updates We assume that the learner has access to a reward signal rt throughout the course of the dialogue, details of which are in the next section. 
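Before continuing with the REINFORCE derivation, the GRU recurrence of Eq. (15) in Appendix B can be written directly. The sketch below is a plain NumPy rendering with our own parameter naming and shapes; it is not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU update following Eq. (15); params holds Wr, Ur, br, Wz, Uz, bz, Wh, Uh, bh."""
    r = sigmoid(params["Wr"] @ x + params["Ur"] @ h_prev + params["br"])
    z = sigmoid(params["Wz"] @ x + params["Uz"] @ h_prev + params["bz"])
    h_tilde = np.tanh(params["Wh"] @ x + params["Uh"] @ (r * h_prev) + params["bh"])
    return (1.0 - z) * h_prev + z * h_tilde

def gru_encode(xs, d, params):
    """Runs the GRU over a sequence of input vectors, starting from h0 = 0."""
    h = np.zeros(d)
    for x in xs:
        h = gru_step(x, h, params)
    return h
```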
We can write the expected discounted return of the agent under policy π as follows: J(θ) = E " H X t=0 γtrt # (16) Here, the expectation is over all possible trajectories τ of the dialogue, θ denotes the trainable parameters of the learner, H is the maximum length of an episode, and γ is the discounting factor. We can use the likelihood ratio trick (Glynn, 1990) to write the gradient of the objective as follows: ∇θJ(θ) = E " ∇θ log pθ(τ) H X t=0 γtrt # , (17) where pθ(τ) is the probability of observing a particular trajectory under the current policy. With a Markovian assumption, we can write pθ(τ) = p(s0) H Y k=0 p(sk+1|sk, ak)πθ(ak|sk), (18) where θ denotes dependence on the neural network parameters. From 17,18 we obtain ∇θJ(θ) = Ea∼π h H X k=0 ∇θ log πθ(ak) H X t=0 γtrt i , (19) If we need to train both the policy network and the belief trackers using the reinforcement signal, we can view the KB posterior pt T as another policy. During training then, to encourage exploration, when the agent selects the inform action we 494 sample R results from the following distribution to return to the user: µ(I) = pt T (i1) × pt T (i2) 1 −pt T (i1) × · · · . (20) This formulation also leads to a modified version of the episodic REINFORCE update rule (Williams, 1992). Specifically, eq. 18 now becomes, pθ(τ) = " p(s0) H Y k=0 p(sk+1|sk, ak)πθ(ak|sk) # µθ(I), (21) Notice the last term µθ above which is the posterior of a set of results from the KB. From 17,21 we obtain ∇θJ(θ) =Ea∼π,I∼µ h∇θ log µθ(I)+ H X h=0 ∇θ log πθ(ah)  H X k=0 γkrk i , (22) D Hyperparameters We use GRU hidden state size of d = 50 for the RL agents and d = 100 for the E2E, a learning rate of 0.05 for the imitation learning phase and 0.005 for the reinforcement learning phase, and minibatch size 128. For the rule agents, hyperparameters were tuned to maximize the average reward of each agent in simulations. For the E2E agent, imitation learning was performed for 500 updates, after which the agent switched to reinforcement learning. The input vocabulary is constructed from the NLG vocabulary and bigrams in the KB, and its size is 3078. E User Simulator At the beginning of each dialogue, the simulated user randomly samples a target entity from the ECKB and a random combination of informable slots for which it knows the value of the target. The remaining slot-values are unknown to the user. The user initiates the dialogue by providing a subset of its informable slots to the agent and requesting for an entity which matches them. In subsequent turns, if the agent requests for the value of a slot, the user complies by providing it or informs the agent that it does not know that value. If the agent informs results from the KB, the simulator checks whether the target is among them and provides the reward. We convert dialogue acts from the user into natural language utterances using a separately trained natural language generator (NLG). The NLG is trained in a sequence-to-sequence fashion, using conversations between humans collected by crowd-sourcing. It takes the dialogue actions (DAs) as input, and generates template-like sentences with slot placeholders via an LSTM decoder. Then, a post-processing scan is performed to replace the slot placeholders with their actual values, which is similar to the decoder module in (Wen et al., 2015, 2016a). In the LSTM decoder, we apply beam search, which iteratively considers the top k best sentences up to time step t when generating the token of the time step t+1. 
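The sampling distribution µ(I) in Eq. (20) corresponds to drawing R rows without replacement and renormalizing after each draw. A small sketch (names ours) that also returns log µ(I), the term needed by the end-to-end update in Eq. (22):

```python
import numpy as np

def sample_results(p_T, R, rng=None):
    """Samples an ordered list I = (i_1, ..., i_R) without replacement from p_T.

    Renormalizing after each draw reproduces the factorization in Eq. (20):
        mu(I) = p(i_1) * p(i_2) / (1 - p(i_1)) * ...
    Returns the chosen indices and log mu(I).
    """
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(p_T, dtype=float).copy()
    chosen, log_mu = [], 0.0
    for _ in range(R):
        p = p / p.sum()
        i = rng.choice(len(p), p=p)
        chosen.append(i)
        log_mu += np.log(p[i])
        p[i] = 0.0            # remove the selected row before the next draw
    return chosen, log_mu
```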
To trade off speed against performance, we use a beam size of 3 in the following experiments. There are several sources of error in user utterances. Any value provided by the user may be corrupted by noise, or substituted completely with an incorrect value of the same type (e.g., “Bill Murray” might become just “Bill” or “Tom Cruise”). The NLG described above is inherently stochastic, and may sometimes generate utterances irrelevant to the agent request. By increasing the temperature of the output softmax in the NLG we can increase the noise in user utterances.
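A minimal sketch of the temperature-scaled sampling used to inject this noise; the function name and NumPy implementation are ours, while the actual NLG is an LSTM decoder with beam search.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Samples a token id from a softmax over logits scaled by a temperature.

    temperature = 1 reproduces the trained NLG distribution; higher values flatten
    the distribution, yielding noisier, more generic simulator responses.
    """
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                     # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)
```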
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 496–505, Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1046

Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots

Yu Wu†, Wei Wu‡, Chen Xing♦, Zhoujun Li†∗, Ming Zhou‡
†State Key Lab of Software Development Environment, Beihang University, Beijing, China
♦College of Computer and Control Engineering, Nankai University, Tianjin, China
‡Microsoft Research, Beijing, China
{wuyu,lizj}@buaa.edu.cn {wuwei,v-chxing,mingzhou}@microsoft.com
∗Corresponding Author

Abstract

We study response selection for multi-turn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform state-of-the-art methods for response selection in multi-turn conversation.

1 Introduction

Conversational agents include task-oriented dialog systems and non-task-oriented chatbots. Dialog systems focus on helping people complete specific tasks in vertical domains (Young et al., 2010), while chatbots aim to naturally and meaningfully converse with humans on open domain topics (Ritter et al., 2011). Existing work on building chatbots includes generation-based methods and retrieval-based methods. Retrieval based chatbots enjoy the advantage of informative and fluent responses, because they select a proper response for the current conversation from a repository with response selection algorithms.

Context
utterance 1 (Human): How are you doing?
utterance 2 (ChatBot): I am going to hold a drum class in Shanghai. Anyone wants to join? The location is near Lujiazui.
utterance 3 (Human): Interesting! Do you have coaches who can help me practice drum?
utterance 4 (ChatBot): Of course.
utterance 5 (Human): Can I have a free first lesson?
Response Candidates
response 1: Sure. Have you ever played drum before? ✓
response 2: What lessons do you want? ✗
Table 1: An example of multi-turn conversation

While most existing work on retrieval-based chatbots studies response selection for single-turn conversation (Wang et al., 2013) which only considers the last input message, we consider the problem in a multi-turn scenario. In a chatbot, multi-turn response selection takes a message and utterances in its previous turns as input and selects a response that is natural and relevant to the whole context. The key to response selection lies in input-response matching.
Different from single-turn conversation, multi-turn conversation requires matching between a response and a conversation context in which one needs to consider not only the matching between the response and the input message but also matching between responses and utterances in previous turns. The challenges of the task include (1) how to identify important information (words, phrases, and sentences) in context, which is crucial to selecting a proper response and leveraging relevant information in matching; and (2) how to model relationships among the utterances in the context. Table 1 illustrates the challenges with an example. First, “hold a drum class” and “drum” in context are very important. Without them, one may find responses relevant to the message (i.e., the fifth utterance of the context) but nonsense in the context (e.g., “what lessons do you want?”). Second, the message highly depends on the second utterance in the context, and 496 the order of the utterances matters in response selection: exchanging the third utterance and the fifth utterance may lead to different responses. Existing work, however, either ignores relationships among utterances when concatenating them together (Lowe et al., 2015), or loses important information in context in the process of converting the whole context to a vector without enough supervision from responses (e.g., by a hierarchical RNN (Zhou et al., 2016)). We propose a sequential matching network (SMN), a new context based matching model that can tackle both challenges in an end-to-end way. The reason that existing models lose important information in the context is that they first represent the whole context as a vector and then match the context vector with a response vector. Thus, responses in these models connect with the context until the final step in matching. To avoid information loss, SMN matches a response with each utterance in the context at the beginning and encodes important information in each pair into a matching vector. The matching vectors are then accumulated in the utterances’ temporal order to model their relationships. The final matching degree is computed with the accumulation of the matching vectors. Specifically, for each utterance-response pair, the model constructs a word-word similarity matrix and a sequence-sequence similarity matrix by the word embeddings and the hidden states of a recurrent neural network with gated recurrent units (GRU) (Chung et al., 2014) respectively. The two matrices capture important matching information in the pair on a word level and a segment (word subsequence) level respectively, and the information is distilled and fused as a matching vector through an alternation of convolution and pooling operations on the matrices. By this means, important information from multiple levels of granularity in context is recognized under sufficient supervision from the response and carried into matching with minimal loss. The matching vectors are then uploaded to another GRU to form a matching score for the context and the response. The GRU accumulates the pair matching in its hidden states in the chronological order of the utterances in context. It models relationships and dependencies among the utterances in a matching fashion and has the utterance order supervise the accumulation of pair matching. The matching degree of the context and the response is computed by a logit model with the hidden states of the GRU. 
SMN extends the powerful “2D” matching paradigm in text pair matching for single-turn conversation to context based matching for multi-turn conversation, and enjoys the advantage of both important information in utterance-response pairs and relationships among utterances being sufficiently preserved and leveraged in matching. We test our model on the Ubuntu dialogue corpus (Lowe et al., 2015) which is a large scale publicly available English data set for research in multi-turn conversation. The results show that our model can significantly outperform state-ofthe-art methods, and improvement to the best baseline model on R10@1 is over 6%. In addition to the Ubuntu corpus, we create a human-labeled Chinese data set, namely the Douban Conversation Corpus, and test our model on it. In contrast to the Ubuntu corpus in which data is collected from a specific domain and negative candidates are randomly sampled, conversations in this data come from the open domain, and response candidates in this data set are collected from a retrieval engine and labeled by three human judges. On this data, our model improves the best baseline model by 3% on R10@1 and 4% on P@1. As far as we know, Douban Conversation Corpus is the first human-labeled data set for multi-turn response selection and could be a good complement to the Ubuntu corpus. We have released Douban Conversation Corups and our source code at https://github.com/MarkWuNLP/ MultiTurnResponseSelection Our contributions in this paper are three-folds: (1) the proposal of a new context based matching model for multi-turn response selection in retrieval-based chatbots; (2) the publication of a large human-labeled data set to research communities; (3) empirical verification of the effectiveness of the model on public data sets. 2 Related Work Recently, building a chatbot with data driven approaches (Ritter et al., 2011; Ji et al., 2014) has drawn significant attention. Existing work along this line includes retrieval-based methods (Hu et al., 2014; Ji et al., 2014; Wang et al., 2015; Yan et al., 2016; Wu et al., 2016b; Zhou et al., 2016; Wu et al., 2016a) and generation-based methods (Shang et al., 2015; Serban et al., 2015; Vinyals and Le, 2015; Li et al., 2015, 2016; Xing et al., 497 .... .... .... Score 1 2 , M M Convolution Pooling ( ) L .... .... .... 1 u 1 nu  n u r Word Embedding GRU1 GRU2 .... 1 v 1 nv  nv 1 'n h  Utterance-Response Matching (First Layer) Matching Accumulation (Second Layer) Segment Pairs Word Pairs Matching Prediction (Third Layer) 1' h 'n h Figure 1: Architecture of SMN 2016; Serban et al., 2016). Our work is a retrievalbased method, in which we study context-based response selection. Early studies of retrieval-based chatbots focus on response selection for single-turn conversation (Wang et al., 2013; Ji et al., 2014; Wang et al., 2015; Wu et al., 2016b). Recently, researchers have begun to pay attention to multi-turn conversation. For example, Lowe et al. (2015) match a response with the literal concatenation of context utterances. Yan et al. (2016) concatenate context utterances with the input message as reformulated queries and perform matching with a deep neural network architecture. Zhou et al. (2016) improve multi-turn response selection with a multi-view model including an utterance view and a word view. Our model is different in that it matches a response with each utterance at first and accumulates matching information instead of sentences by a GRU, thus useful information for matching can be sufficiently retained. 
3 Sequential Matching Network 3.1 Problem Formalization Suppose that we have a data set D = {(yi, si, ri)}N i=1, where si = {ui,1, . . . , ui,ni} represents a conversation context with {ui,k}ni k=1 as utterances. ri is a response candidate and yi ∈ {0, 1} denotes a label. yi = 1 means ri is a proper response for si, otherwise yi = 0. Our goal is to learn a matching model g(·, ·) with D. For any context-response pair (s, r), g(s, r) measures the matching degree between s and r. 3.2 Model Overview We propose a sequential matching network (SMN) to model g(·, ·). Figure 1 gives the architecture. SMN first decomposes context-response matching into several utterance-response pair matching and then all pairs matching are accumulated as a context based matching through a recurrent neural network. SMN consists of three layers. The first layer matches a response candidate with each utterance in the context on a word level and a segment level, and important matching information from the two levels is distilled by convolution, pooling and encoded in a matching vector. The matching vectors are then fed into the second layer where they are accumulated in the hidden states of a recurrent neural network with GRU following the chronological order of the utterances in the context. The third layer calculates the final matching score with the hidden states of the second layer. SMN enjoys several advantages over existing models. First, a response candidate can match each utterance in the context at the very beginning, thus matching information in every utteranceresponse pair can be sufficiently extracted and carried to the final matching score with minimal loss. Second, information extraction from each utterance is conducted on different levels of granularity and under sufficient supervision from the response, thus semantic structures that are useful for response selection in each utterance can be well identified and extracted. Third, matching and utterance relationships are coupled rather than separately modeled, thus utterance relationships (e.g., order), as a kind of knowledge, can supervise the formation of the matching score. By taking utterance relationships into account, SMN extends the “2D” matching that has proven effective in text pair matching for single-turn response selection to sequential “2D” matching for 498 context based matching in response selection for multi-turn conversation. In the following sections, we will describe details of the three layers. 3.3 Utterance-Response Matching Given an utterance u in a context s and a response candidate r, the model looks up an embedding table and represents u and r as U = [eu,1, . . . , eu,nu] and R = [er,1, . . . , er,nr] respectively, where eu,i, er,i ∈Rd are the embeddings of the i-th word of u and r respectively. U ∈ Rd×nu and R ∈Rd×nr are then used to construct a word-word similarity matrix M1 ∈Rnu×nr and a sequence-sequence similarity matrix M2 ∈Rnu×nr which are two input channels of a convolutional neural network (CNN). The CNN distills important matching information from the matrices and encodes the information into a matching vector v. Specifically, ∀i, j, the (i, j)-th element of M1 is defined by e1,i,j = e⊤ u,i · er,j. (1) M1 models the matching between u and r on a word level. To construct M2, we first employ a GRU to transform U and R to hidden vectors. Suppose that Hu = [hu,1, . . . 
, hu,nu] are the hidden vectors of U, then ∀i, hu,i ∈Rm is defined by zi = σ(Wzeu,i + Uzhu,i−1) ri = σ(Wreu,i + Urhu,i−1) ehu,i = tanh(Wheu,i + Uh(ri ⊙hu,i−1)) hu,i = zi ⊙ehu,i + (1 −zi) ⊙hu,i−1, (2) where hu,0 = 0, zi and ri are an update gate and a reset gate respectively, σ(·) is a sigmoid function, and Wz, Wh, Wr, Uz, Ur,Uh are parameters. Similarly, we have Hr = [hr,1, . . . , hr,nr] as the hidden vectors of R. Then, ∀i, j, the (i, j)-th element of M2 is defined by e2,i,j = h⊤ u,iAhr,j, (3) where A ∈Rm×m is a linear transformation. ∀i, GRU models the sequential relationship and the dependency among words up to position i and encodes the text segment until the i-th word to a hidden vector. Therefore, M2 models the matching between u and r on a segment level. M1 and M2 are then processed by a CNN to form v. ∀f = 1, 2, CNN regards Mf as an input channel, and alternates convolution and max-pooling operations. Suppose that z(l,f) = h z(l,f) i,j i I(l,f)×J(l,f) denotes the output of feature maps of type-f on layer-l, where z(0,f) = Mf, ∀f = 1, 2. On the convolution layer, we employ a 2D convolution operation with a window size r(l,f) w × r(l,f) h , and define z(l,f) i,j as z(l,f) i,j = σ( Fl−1 X f′=0 r(l,f) wX s=0 r(l,f) hX t=0 W(l,f) s,t · z(l−1,f′) i+s,j+t + bl,k), (4) where σ(·) is a ReLU, W(l,f) ∈Rr(l,f) w ×r(l,f) h and bl,k are parameters, and Fl−1 is the number of feature maps on the (l −1)-th layer. A max pooling operation follows a convolution operation and can be formulated as z(l,f) i,j = max p(l,f) w >s≥0 max p(l,f) h >t≥0 zi+s,j+t, (5) where p(l,f) w and p(l,f) h are the width and the height of the 2D pooling respectively. The output of the final feature maps are concatenated and mapped to a low dimensional space with a linear transformation as the matching vector v ∈Rq. According to Equation (1), (3), (4), and (5), we can see that by learning word embedding and parameters of GRU from training data, words or segments in an utterance that are useful for recognizing the appropriateness of a response may have high similarity with some words or segments in the response and result in high value areas in the similarity matrices. These areas will be transformed and selected by convolution and pooling operations and carry important information in the utterance to the matching vector. This is how our model identifies important information in context and leverage it in matching under the supervision of the response. We consider multiple channels because we want to capture important matching information on multiple levels of granularity of text. 3.4 Matching Accumulation Suppose that [v1, . . . , vn] is the output of the first layer (corresponding to n pairs), at the second layer, a GRU takes [v1, . . . , vn] as an input and encodes the matching sequence into its hidden states Hm = [h′ 1, . . . , h′ n] ∈Rq×n with a detailed parameterization similar to Equation (2). This layer has two functions: (1) it models the dependency and the temporal relationship of utterances in the 499 context; (2) it leverages the temporal relationship to supervise the accumulation of the pair matching as a context based matching. Moreover, from Equation (2), we can see that the reset gate (i.e., ri) and the update gate (i.e., zi) control how much information from the previous hidden state and the current input flows to the current hidden state, thus important matching vectors (corresponding to important utterances) can be accumulated while noise in the vectors can be filtered out. 
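To make the first layer concrete, the following is a minimal NumPy sketch of Equations (1)-(3): the word-level similarity matrix M1 computed directly from word embeddings, and the segment-level matrix M2 computed from GRU hidden states through the bilinear form with A. It uses toy sizes and randomly initialized parameters purely for illustration (the names U, R, Hu, Hr, and A follow the notation above); it is not the authors' implementation.

```python
import numpy as np

def gru_encode(X, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a GRU over word embeddings X (n_words x d) and return
    the hidden states (n_words x m), following Equation (2)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros(Uz.shape[0])
    states = []
    for x in X:
        z = sigmoid(Wz @ x + Uz @ h)               # update gate
        r = sigmoid(Wr @ x + Ur @ h)               # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
        h = z * h_tilde + (1.0 - z) * h
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
d, m, n_u, n_r = 200, 200, 12, 9                   # toy sizes (the paper uses d = m = 200)
U = rng.normal(size=(n_u, d))                      # utterance word embeddings (one row per word)
R = rng.normal(size=(n_r, d))                      # response word embeddings

# Word-level channel M1: (i, j) is the dot product of the i-th utterance
# word embedding and the j-th response word embedding (Equation 1).
M1 = U @ R.T

# Segment-level channel M2: bilinear similarity of GRU hidden states (Equation 3).
def rand(shape):
    return rng.normal(scale=0.1, size=shape)

Wz, Wr, Wh = rand((m, d)), rand((m, d)), rand((m, d))
Uz, Ur, Uh = rand((m, m)), rand((m, m)), rand((m, m))
Hu = gru_encode(U, Wz, Uz, Wr, Ur, Wh, Uh)         # n_u x m
Hr = gru_encode(R, Wz, Uz, Wr, Ur, Wh, Uh)         # n_r x m
A = rand((m, m))                                   # linear transformation A
M2 = Hu @ A @ Hr.T

print(M1.shape, M2.shape)                          # both (n_u, n_r)
```

In the full model, M1 and M2 are the two input channels of the CNN in Equations (4)-(5), which distills them into the matching vector v that is then accumulated by the second-layer GRU.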
3.5 Matching Prediction and Learning With [h′ 1, . . . , h′ n], we define g(s, r) as g(s, r) = softmax(W2L[h′ 1, . . . , h′ n] + b2), (6) where W2 and b2 are parameters. We consider three parameterizations for L[h′ 1, . . . , h′ n]: (1) only the last hidden state is used. Then L[h′ 1, . . . , h′ n] = h′ n. (2) the hidden states are linearly combined. Then, L[h′ 1, . . . , h′ n] = Pn i=1 wih′ i, where wi ∈R. (3) we follow (Yang et al., 2016) and employ an attention mechanism to combine the hidden states. Then, L[h′ 1, . . . , h′ n] is defined as ti = tanh(W1,1hui,nu + W1,2h′ i + b1), αi = exp(t⊤ i ts) P i(exp(t⊤ i ts)), L[h′ 1, . . . , h′ n] = n X i=1 αih′ i, (7) where W1,1 ∈Rq×m, W1,2 ∈Rq×q and b1 ∈ Rq are parameters. h′ i and hui,nu are the i-th matching vector and the final hidden state of the i-th utterance respectively. ts ∈Rq is a virtual context vector which is randomly initialized and jointly learned in training. Both (2) and (3) aim to learn weights for {h′ 1, . . . , h′ n} from training data and highlight the effect of important matching vectors in the final matching. The difference is that weights in (2) are static, because the weights are totally determined by the positions of utterances, while weights in (3) are dynamically computed by the matching vectors and utterance vectors. We denote our model with the three parameterizations of L[h′ 1, . . . , h′ n] as SMNlast, SMNstatic, and SMNdynamic, and empirically compare them in experiments. We learn g(·, ·) by minimizing cross entropy with D. Let Θ denote the parameters of SMN, then the objective function L(D, Θ) of learning can be formulated as − N X i=1 [yilog(g(si, ri)) + (1 −yi)log(1 −g(si, ri))] . (8) 4 Response Candidate Retrieval In practice, a retrieval-based chatbot, to apply the matching approach to the response selection, one needs to retrieve a number of response candidates from an index beforehand. While candidate retrieval is not the focus of the paper, it is an important step in a real system. In this work, we exploit a heuristic method to obtain response candidates from the index. Given a message un with {u1, . . . , un−1} utterances in its previous turns, we extract the top 5 keywords from {u1, . . . , un−1} based on their tf-idf scores1 and expand un with the keywords. Then we send the expanded message to the index and retrieve response candidates using the inline retrieval algorithm of the index. Finally, we use g(s, r) to rerank the candidates and return the top one as a response to the context. 5 Experiments We tested our model on a publicly available English data set and a Chinese data set published with this paper. 5.1 Ubuntu Corpus The English data set is the Ubuntu Corpus (Lowe et al., 2015) which contains multi-turn dialogues collected from chat logs of the Ubuntu Forum. The data set consists of 1 million context-response pairs for training, 0.5 million pairs for validation, and 0.5 million pairs for testing. Positive responses are true responses from humans, and negative ones are randomly sampled. The ratio of the positive and the negative is 1:1 in training, and 1:9 in validation and testing. We used the copy shared by Xu et al. (2016) 2 in which numbers, urls, and paths are replaced by special placeholders. We followed (Lowe et al., 2015) and employed recall at position k in n candidates (Rn@k) as evaluation metrics. 1Tf is word frequency in the context, while idf is calculated using the entire index. 
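As a concrete reading of the retrieval heuristic in Section 4, the sketch below expands the last message with the top 5 tf-idf keywords drawn from the previous turns before querying the index. The whitespace tokenization and the hand-made idf table are placeholder assumptions; only the expansion logic follows the description above, and the retrieved candidates would then be reranked with g(s, r).

```python
from collections import Counter

def expand_message(context_utterances, idf, top_k=5):
    """Append the top-k tf-idf keywords from the previous turns to the
    last message, as in the candidate retrieval heuristic of Section 4."""
    *previous_turns, message = context_utterances
    words = [w for turn in previous_turns for w in turn.split()]
    tf = Counter(words)                                   # tf: frequency within the context
    scores = {w: tf[w] * idf.get(w, 0.0) for w in tf}     # idf: computed from the whole index
    keywords = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return message + " " + " ".join(keywords)

# Toy idf table; in practice idf comes from the index, as noted in footnote 1.
idf = {"unzip": 5.2, "rar": 6.0, "files": 2.1, "bash": 4.3, "the": 0.1, "you": 0.2}
context = ["how can unzip many rar files at once",
           "sure you can do that in bash",
           "okay how",
           "are the files all in the same directory",
           "yes they all are"]
print(expand_message(context, idf))
```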
2https://www.dropbox.com/s/ 2fdn26rj6h9bpvl/ubuntudata.zip?dl=0 500 5.2 Douban Conversation Corpus The Ubuntu Corpus is a domain specific data set, and response candidates are obtained from negative sampling without human judgment. To further verify the efficacy of our model, we created a new data set with open domain conversations, called the Douban Conversation Corpus. Response candidates in the test set of the Douban Conversation Corpus are collected following the procedure of a retrieval-based chatbot and are labeled by human judges. It simulates the real scenario of a retrievalbased chatbot. We publish it to research communities to facilitate the research of multi-turn response selection. Specifically, we crawled 1.1 million dyadic dialogues (conversation between two persons) longer than 2 turns from Douban group3 which is a popular social networking service in China. We randomly sampled 0.5 million dialogues for creating a training set, 25 thousand dialouges for creating a validation set, and 1, 000 dialogues for creating a test set, and made sure that there is no overlap between the three sets. For each dialogue in training and validation, we took the last turn as a positive response for the previous turns as a context and randomly sampled another response from the 1.1 million data as a negative response. There are 1 million context-response pairs in the training set and 50 thousand pairs in the validation set. To create the test set, we first crawled 15 million post-reply pairs from Sina Weibo4 which is the largest microblogging service in China and indexed the pairs with Lucene5. We took the last turn of each Douban dyadic dialogue in the test set as a message, retrieved 10 response candidates from the index following the method in Section 4, and finally formed a test set with 10, 000 context-response pairs. We recruited three labelers to judge if a candidate is a proper response to the context. A proper response means the response can naturally reply to the message given the whole context. Each pair received three labels and the majority of the labels were taken as the final decision. Table 2 gives the statistics of the three sets. Note that the Fleiss’ kappa (Fleiss, 1971) of the labeling is 0.41, which indicates that the three labelers reached a relatively high agreement. Besides Rn@ks, we also followed the conven3https://www.douban.com/group 4http://weibo.com/ 5https://lucenenet.apache.org/ train val test # context-response pairs 1M 50k 10k # candidates per context 2 2 10 # positive candidates per context 1 1 1.18 Min. # turns per context 3 3 3 Max. # turns per context 98 91 45 Avg. # turns per context 6.69 6.75 6.45 Avg. # words per utterance 18.56 18.50 20.74 Table 2: Statistics of Douban Conversation Corpus tion of information retrieval and employed mean average precision (MAP) (Baeza-Yates et al., 1999), mean reciprocal rank (MRR) (Voorhees et al., 1999), and precision at position 1 (P@1) as evaluation metrics. We did not calculate R2@1 because in Douban corpus one context could have more than one correct responses, and we have to randomly sample one for R2@1, which may bring bias to evaluation. When using the labeled set, we removed conversations with all negative responses or all positive responses, as models make no difference with them. There are 6, 670 contextresponse pairs left in the test set. 5.3 Baseline We considered the following baselines: Basic models: models in (Lowe et al., 2015) and (Kadlec et al., 2015) including TF-IDF, RNN, CNN, LSTM and BiLSTM. 
Multi-view: the model proposed by Zhou et al. (2016) that utilizes a hierarchical recurrent neural network to model utterance relationships. Deep learning to respond (DL2R): the model proposed by Yan et al. (2016) that reformulates the message with other utterances in the context. Advanced single-turn matching models: since BiLSTM does not represent the state-ofthe-art matching model, we concatenated the utterances in a context and matched the long text with a response candidate using more powerful models including MV-LSTM (Wan et al., 2016) (2D matching), Match-LSTM (Wang and Jiang, 2015), Attentive-LSTM (Tan et al., 2015) (two attention based models), and Multi-Channel which is described in Section 3.3. Multi-Channel is a simple version of our model without considering utterance relationships. We also appended the top 5 tf-idf words in context to the input message, and computed the score between the expanded message and a response with Multi-Channel, denoted as Multi-Channelexp. 501 Ubuntu Corpus Douban Conversation Corpus R2@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 TF-IDF 0.659 0.410 0.545 0.708 0.331 0.359 0.180 0.096 0.172 0.405 RNN 0.768 0.403 0.547 0.819 0.390 0.422 0.208 0.118 0.223 0.589 CNN 0.848 0.549 0.684 0.896 0.417 0.440 0.226 0.121 0.252 0.647 LSTM 0.901 0.638 0.784 0.949 0.485 0.527 0.320 0.187 0.343 0.720 BiLSTM 0.895 0.630 0.780 0.944 0.479 0.514 0.313 0.184 0.330 0.716 Multi-View 0.908 0.662 0.801 0.951 0.505 0.543 0.342 0.202 0.350 0.729 DL2R 0.899 0.626 0.783 0.944 0.488 0.527 0.330 0.193 0.342 0.705 MV-LSTM 0.906 0.653 0.804 0.946 0.498 0.538 0.348 0.202 0.351 0.710 Match-LSTM 0.904 0.653 0.799 0.944 0.500 0.537 0.345 0.202 0.348 0.720 Attentive-LSTM 0.903 0.633 0.789 0.943 0.495 0.523 0.331 0.192 0.328 0.718 Multi-Channel 0.904 0.656 0.809 0.942 0.506 0.543 0.349 0.203 0.351 0.709 Multi-Channelexp 0.714 0.368 0.497 0.745 0.476 0.515 0.317 0.179 0.335 0.691 SMNlast 0.923 0.723 0.842 0.956 0.526 0.571 0.393 0.236 0.387 0.729 SMNstatic 0.927 0.725 0.838 0.962 0.523 0.572 0.387 0.228 0.387 0.734 SMNdynamic 0.926 0.726 0.847 0.961 0.529 0.569 0.397 0.233 0.396 0.724 Table 3: Evaluation results on the two data sets. Numbers in bold mean that the improvement is statistically significant compared with the best baseline. 5.4 Parameter Tuning For baseline models, if their results are available in existing literature (e.g., those on the Ubuntu corpus), we just copied the numbers, otherwise we implemented the models following the settings in the literatures. All models were implemented using Theano (Theano Development Team, 2016). Word embeddings were initialized by the results of word2vec (Mikolov et al., 2013) which ran on the training data, and the dimensionality of word vectors is 200. For Multi-Channel and layer one of our model, we set the dimensionality of the hidden states of GRU as 200. We tuned the window size of convolution and pooling in {(2, 2), (3, 3)(4, 4)} and chose (3, 3) finally. The number of feature maps is 8. In layer two, we set the dimensionality of matching vectors and the hidden states of GRU as 50. The parameters were updated by stochastic gradient descent with Adam algorithm (Kingma and Ba, 2014) on a single Tesla K80 GPU. The initial learning rate is 0.001, and the parameters of Adam, β1 and β2 are 0.9 and 0.999 respectively. We employed early-stopping as a regularization strategy. Models were trained in minibatches with a batch size of 200, and the maximum utterance length is 50. 
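For reference, the ranking metrics reported in Table 3 (R_n@k, P@1, MAP, and MRR) can be computed per context as in the sketch below, assuming each test context comes with model scores and binary relevance labels for its candidates; the corpus-level numbers are averages of these values over all test contexts.

```python
def rank_labels(scores, labels):
    """Sort the binary labels of one context's candidates by model score (descending)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [labels[i] for i in order]

def recall_at_k(scores, labels, k):
    """R_n@k: fraction of the positive candidates ranked in the top k."""
    ranked = rank_labels(scores, labels)
    return sum(ranked[:k]) / max(sum(labels), 1)

def precision_at_1(scores, labels):
    return float(rank_labels(scores, labels)[0])

def average_precision(scores, labels):
    ranked = rank_labels(scores, labels)
    hits, ap = 0, 0.0
    for i, y in enumerate(ranked, start=1):
        if y:
            hits += 1
            ap += hits / i
    return ap / max(hits, 1)

def reciprocal_rank(scores, labels):
    for i, y in enumerate(rank_labels(scores, labels), start=1):
        if y:
            return 1.0 / i
    return 0.0

# One context with 10 candidates and two correct responses.
scores = [0.9, 0.2, 0.75, 0.4, 0.1, 0.6, 0.3, 0.8, 0.05, 0.5]
labels = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
print(recall_at_k(scores, labels, 1), precision_at_1(scores, labels),
      average_precision(scores, labels), reciprocal_rank(scores, labels))
```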
We set the maximum context length (i.e., number of utterances) as 10, because the performance of models does not improve on contexts longer than 10 (details are shown in the Section 5.6). We padded zeros if the number of utterances in a context is less than 10, otherwise we kept the last 10 utterances. 5.5 Evaluation Results Table 3 shows the evaluation results on the two data sets. Our models outperform baselines greatly in terms of all metrics on both data sets, with the improvements being statistically significant (t-test with p-value ≤0.01, except R10@5 on Douban Corpus). Even the state-of-the-art singleturn matching models perform much worse than our models. The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. Our models achieve significant improvements over Multi-View, which justified our “matching first” strategy. DL2R is worse than our models, indicating that utterance reformulation with heuristic rules is not a good method for utilizing context information. Rn@ks are low on the Douban Corpus as there are multiple correct candidates for a context (e.g., if there are 3 correct responses, then the maximum R10@1 is 0.33). SMNdynamic is only slightly better than SMNstatic and SMNlast. The reason might be that the GRU can select useful signals from the matching sequence and accumulate them in the final state with its gate mechanism, thus the efficacy of an attention mechanism is not obvious for the task at hand. 5.6 Further Analysis Visualization: we visualize the similarity matrices and the gates of GRU in layer two using an example from the Ubuntu corpus to further clarify how our model identifies important information in the context and how it selects important matching vectors with the gate mechanism of GRU as described in Section 3.3 and Section 3.4. The example is {u1: how can unzip many rar ( number for example ) files at once; u2: sure you can do that in bash; u3: okay how? u4: are the files all 502 Ubuntu Corpus Douban Conversation Corpus R2@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 ReplaceM 0.905 0.661 0.799 0.950 0.503 0.541 0.343 0.201 0.364 0.729 ReplaceA 0.918 0.716 0.832 0.954 0.522 0.565 0.376 0.220 0.385 0.727 Only M1 0.919 0.704 0.832 0.955 0.518 0.562 0.370 0.228 0.371 0.737 Only M2 0.921 0.715 0.836 0.956 0.521 0.565 0.382 0.232 0.380 0.734 SMNlast 0.923 0.723 0.842 0.956 0.526 0.571 0.393 0.236 0.387 0.729 Table 4: Evaluation results of model ablation. then the command glebihan should extract them all from/to that directory how can unzip many rar ( _number_ for example ) files at once 0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value (a) M1 of u1 and r then the command glebihan should extract them all from/to that directory okay how 0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value (b) M1 of u3 and r 0 10 20 30 40 u_1 u_2 u_3 u_4 u_5 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 value (c) Update gate Figure 2: Model visualization. Darker areas mean larger value. in the same directory? u5: yes they all are; r: then the command glebihan should extract them all from/to that directory}. It is from the test set and our model successfully ranked the correct response to the top position. Due to space limitation, we only visualized M1, M2 and the update gate (i.e. z) in Figure 2. 
We can see that in u1 important words including “unzip”, “rar”, “files” are recognized and carried to matching by “command”, “extract”, and “directory” in r, while u3 is almost useless and thus little information is extracted from it. u1 is crucial to response selection and nearly all information from u1 and r flows to the hidden state of GRU, while other utterances are less informative and the corresponding gates are almost “closed” to keep the information from u1 and r until the final state. Model ablation: we investigate the effect of different parts of SMN by removing them one by one from SMNlast, shown in Table 4. First, replacing the multi-channel “2D” matching with a neural tensor network (NTN) (Socher et al., 2013) (denoted as ReplaceM) makes the performance drop dramatically. This is because NTN only matches a pair by an utterance vector and a response vector and loses important information in the pair. Together with the visualization, we can conclude that “2D” matching plays a key role in the “matching first” strategy as it captures the important matching information in each pair with minimal loss. Second, the performance drops slightly when replacing the GRU for matching accumulation with a multi-layer perceptron (denoted as ReplaceA). This indicates that utterance relationships are useful. Finally, we left only one channel in matching and found that M2 is a little more powerful than M1 and we achieve the best results with both of them (except on R10@5 on the Douban Corpus). Performance across context length: we study how our model (SMNlast) performs across the length of contexts. Figure 3 shows the comparison on MAP in different length intervals on the Douban corpus. Our model consistently performs better than the baselines, and when contexts become longer, the gap becomes larger. The results demonstrate that our model can well capture the dependencies, especially long dependencies, among utterances in contexts. (2,5] (5,10] (10,) context length 40 45 50 55 60 MAP LSTM MV-LSTM Multi-View SMN Figure 3: Comparison across context length Maximum context length: we investigate the influence of maximum context length for SMN. Figure 4 shows the performance of SMN on Ubuntu Corpus and Douban Corpus with respect to maximum context length. From Figure 4, we find that performance improves significantly when the maximum context length is lower than 5, and becomes stable after the context length reaches 10. This indicates that context information is important for multi-turn response selection, and we can set the maximum context length as 10 to balance effectiveness and efficiency. Error analysis: although SMN outperforms baseline methods on the two data sets, there are 503 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Maximum Context length 0.5 0.6 0.7 0.8 0.9 1.0 Score R_2@1 R_10@1 R_10@2 R_10@5 (a) Ubuntu Corpus 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Maximum context length 0.30 0.35 0.40 0.45 0.50 0.55 0.60 Score MAP MRR P@1 (b) Douban Conversation Corpus Figure 4: Performance of SMN across maximum context length still several problems that cannot be handled perfectly. (1) Logical consistency. SMN models the context and response on the semantic level, but pays little attention to logical consistency. This leads to several DSATs in the Douban Corpus. For example, given a context {a: Does anyone know Newton jogging shoes? b: 100 RMB on Taobao. a: I know that. I do not want to buy it because that is a fake which is made in Qingdao ,b: Is it the only reason you do not want to buy it? 
}, SMN gives a large score to the response { It is not a fake. I just worry about the date of manufacture}. The response is inconsistent with the context on logic, as it claims that the jogging shoes are not fake. In the future, we shall explore the logic consistency problem in retrieval-based chatbots. (2) No correct candidates after retrieval. In the experiment, we prepared 1000 contexts for testing, but only 667 contexts have correct candidates after candidate response retrieval. This indicates that there is still room for candidate retrieval components to improve, and only expanding the input message with several keywords in context may not be a perfect approach for candidate retrieval. In the future, we will consider advanced methods for retrieving candidates. 6 Conclusion and Future Work We present a new context based model for multiturn response selection in retrieval-based chatbots. Experiment results on open data sets show that the model can significantly outperform the stateof-the-art methods. Besides, we publish the first human-labeled multi-turn response selection data set to research communities. In the future, we shall study how to model logical consistency of responses and improve candidate retrieval. 7 Acknowledgment We appreciate valuable comments provided by anonymous reviewers and our discussions with Zhao Yan. This work was supported by the National Natural Science Foundation of China (Grand Nos. 61672081, U1636211, 61370126), Beijing Advanced Innovation Center for Imaging Technology (No.BAICIT-2016001), National High Technology Research and Development Program of China (No.2015AA016004), and the Fund of the State Key Laboratory of Software Development Environment (No.SKLSDE-2015ZX-16). References Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval, volume 463. ACM press New York. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin 76(5):378. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems. pages 2042–2050. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988 . Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for ubuntu corpus dialogs. arXiv preprint arXiv:1510.03753 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . 504 Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909 . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. 
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 583–593. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Building end-to-end dialogue systems using generative hierarchical neural network models. arXiv preprint arXiv:1507.04808 . Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron Courville. 2016. Multiresolution recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776 . Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 1577–1586. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems. pages 926–934. Ming Tan, Bing Xiang, and Bowen Zhou. 2015. Lstmbased deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108 . Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 . Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec. volume 99, pages 77– 82. Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: Modeling the recursive matching structure with spatial rnn. arXiv preprint arXiv:1604.04378 . Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In EMNLP. pages 935–945. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. arXiv preprint arXiv:1503.02427 . Shuohang Wang and Jing Jiang. 2015. Learning natural language inference with lstm. arXiv preprint arXiv:1512.08849 . Bowen Wu, Baoxun Wang, and Hui Xue. 2016a. Ranking responses oriented to conversational relevance in chat-bots. COLING16 . Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2016b. Topic augmented neural network for short text conversation. CoRR abs/1605.00090. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. arXiv preprint arXiv:1606.08340 . Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into lstm with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110 . Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In SIGIR 2016, Pisa, Italy, July 17-21, 2016. pages 55– 64. https://doi.org/10.1145/2911451.2911542. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Steve Young, Milica Gaˇsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. 
The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language 24(2):150–174. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, R. Yan, D. Yu, Xuan Liu, and H. Tian. 2016. Multi-view response selection for human-computer conversation. EMNLP16.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 506–517 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1047 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 506–517 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1047 Learning word-like units from joint audio-visual analysis David Harwath and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139, USA {dharwath,glass}@mit.edu Abstract Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words “lighthouse” within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images. 1 Introduction 1.1 Problem Statement and Motivation Automatically discovering words and other elements of linguistic structure from continuous speech has been a longstanding goal in computational linguists, cognitive science, and other speech processing fields. Practically all humans acquire language at a very early age, but this task has proven to be an incredibly difficult problem for computers. While conventional automatic speech recognition (ASR) systems have a long history and have recently made great strides thanks to the revival of deep neural networks (DNNs), their reliance on highly supervised training paradigms has essentially restricted their application to the major languages of the world, accounting for a small fraction of the more than 7,000 human languages spoken worldwide (Lewis et al., 2016). The main reason for this limitation is the fact that these supervised approaches require enormous amounts of very expensive human transcripts. Moreover, the use of the written word is a convenient but limiting convention, since there are many oral languages which do not even employ a writing system. In constrast, infants learn to communicate verbally before they are capable of reading and writing - so there is no inherent reason why spoken language systems need to be inseparably tied to text. The key contribution of this paper has two facets. First, we introduce a methodology capable of not only discovering word-like units from continuous speech at the waveform level with no additional text transcriptions or conventional speech recognition apparatus. Instead, we jointly learn the semantics of those units via visual associations. Although we evaluate our algorithm on an English corpus, it could conceivably run on any language without requiring any text or associated ASR capability. Second, from a computational perspective, our method of speech pattern discovery runs in linear time. 
Previous work has presented algorithms for performing acoustic pattern discovery in continuous speech (Park and Glass, 2008; Jansen et al., 2010; Jansen and Van Durme, 2011) without the use of transcriptions or another modality, but those algorithms are limited in their ability to scale by their inherent O(n2) complexity, since they do an exhaustive comparison of the data against itself. Our method leverages correlated information from a second modality - the visual domain - to guide the discovery of words and phrases. This enables our method to run in O(n) time, and we demonstrate it scalability by discovering acoustic patterns in over 522 hours of audio. 1.2 Previous Work A sub-field within speech processing that has garnered much attention recently is unsupervised 506 speech pattern discovery. Segmental Dynamic Time Warping (S-DTW) was introduced by Park and Glass (2008), which discovers repetitions of the same words and phrases in a collection of untranscribed acoustic data. Many subsequent efforts extended these ideas (Jansen et al., 2010; Jansen and Van Durme, 2011; Dredze et al., 2010; Harwath et al., 2012; Zhang and Glass, 2009). Alternative approaches based on Bayesian nonparametric modeling (Lee and Glass, 2012; Ondel et al., 2016) employed a generative model to cluster acoustic segments into phoneme-like categories, and related works aimed to segment and cluster either reference or learned phonemelike tokens into higher-level units (Johnson, 2008; Goldwater et al., 2009; Lee et al., 2015). While supervised object detection is a standard problem in the vision community, several recent works have tackled the problem of weaklysupervised or unsupervised object localization (Bergamo et al., 2014; Cho et al., 2015; Zhou et al., 2015; Cinbis et al., 2016). Although the focus of this work is discovering acoustic patterns, in the process we jointly associate the acoustic patterns with clusters of image crops, which we demonstrate capture visual patterns as well. The computer vision and NLP communities have begun to leverage deep learning to create multimodal models of images and text. Many works have focused on generating annotations or text captions for images (Socher and Li, 2010; Frome et al., 2013; Socher et al., 2014; Karpathy et al., 2014; Karpathy and Li, 2015; Vinyals et al., 2015; Fang et al., 2015; Johnson et al., 2016). One interesting intersection between word induction from phoneme strings and multimodal modeling of images and text is that of Gelderloos and Chrupaa (2016), who uses images to segment words within captions at the phoneme string level. Other work has taken these ideas beyond text, and attempted to relate images to spoken audio captions directly at the waveform level (Roy, 2003; Harwath and Glass, 2015; Harwath et al., 2016). The work of (Harwath et al., 2016) is the most similar to ours, in which the authors learned embeddings at the entire image and entire spoken caption level and then used the embeddings to perform bidirectional retrieval. In this work, we go further by automatically segmenting and clustering the spoken captions into individual word-like units, as well as the images into object-like categories. 2 Experimental Data We employ a corpus of over 200,000 spoken captions for images taken from the Places205 dataset (Zhou et al., 2014), corresponding to over 522 hours of speech data. The captions were collected using Amazon’s Mechanical Turk service, in which workers were shown images and asked to describe them verbally in a free-form manner. 
The data collection scheme is described in detail in Harwath et al. (2016), but the experiments in this paper leverage nearly twice the amount of data. For training our multimodal neural network as well as the pattern discovery experiments, we use a subset of 214,585 image/caption pairs, and we hold out a set of 1,000 pairs for evaluating the multimodal network’s retrieval ability. Because we lack ground truth text transcripts for the data, we used Google’s Speech Recognition public API to generate proxy transcripts which we use when analyzing our system. Note that the ASR was only used for analysis of the results, and was not involved in any of the learning. 3 Audio-Visual Embedding Neural Networks We first train a deep multimodal embedding network similar in spirit to the one described in Harwath et al. (2016), but with a more sophisticated architecture. The model is trained to map entire image frames and entire spoken captions into a shared embedding space; however, as we will show, the trained network can then be used to localize patterns corresponding to words and phrases within the spectrogram, as well as visual objects within the image by applying it to small sub-regions of the image and spectrogram. The model is comprised of two branches, one which takes as input images, and the other which takes as input spectrograms. The image network is formed by taking the off-the-shelf VGG 16 layer network (Simonyan and Zisserman, 2014) and replacing the softmax classification layer with a linear transform which maps the 4096-dimensional activations of the second fully connected layer into our 1024-dimensional multimodal embedding space. In our experiments, the weights of this projection layer are trained, but the layers taken from the VGG network below it are kept fixed. The second branch of our network analyzes speech spectrograms as if they were black and white images. Our spectrograms are computed using 40 log Mel 507 filterbanks with a 25ms Hamming window and a 10ms shift. The input to this branch always has 1 color channel and is always 40 pixels high (corresponding to the 40 Mel filterbanks), but the width of the spectrogram varies depending upon the duration of the spoken caption, with each pixel corresponding to approximately 10 milliseconds worth of audio. The architecture we use is entirely convolutional and shown below, where C denotes the number of convolutional channels, W is filter width, H is filter height, and S is pooling stride. 1. Convolution: C=128, W=1, H=40, ReLU 2. Convolution: C=256, W=11, H=1, ReLU 3. Maxpool: W=3, H=1, S=2 4. Convolution: C=512, W=17, H=1, ReLU 5. Maxpool: W=3, H=1, S=2 6. Convolution: C=512, W=17, H=1, ReLU 7. Maxpool: W=3, H=1, S=2 8. Convolution: C=1024, W=17, H=1, ReLU 9. Meanpool over entire caption 10. L2 normalization In practice during training, we restrict the caption spectrograms to all be 1024 frames wide (i.e., 10sec of speech) by applying truncation or zero padding. Additionally, both the images and spectrograms are mean normalized before training. The overall multimodal network is formed by tying together the image and audio branches with a layer which takes both of their output vectors and computes an inner product between them, representing the similarity score between a given image/caption pair. We train the network to assign high scores to matching image/caption pairs, and lower scores to mismatched pairs. Within a minibatch of B image/caption pairs, let Sp j , j = 1, . . . 
, B denote the similarity score of the jth image/caption pair as output by the neural network. Next, for each pair we randomly sample one impostor caption and one impostor image from the same minibatch. Let Si j denote the similarity score between the jth caption and its impostor image, and Sc j be the similarity score between the jth image and its impostor caption. The total loss for the entire minibatch is then computed as L(θ) = B X j=1 [max(0, Sc j −Sp j + 1) + max(0, Si j −Sp j + 1)] (1) We train the neural network with 50 epochs of stochastic gradient descent using a batch size B = 128, a momentum of 0.9, and a learning rate of 1e5 which is set to geometrically decay by a factor between 2 and 5 every 5 to 10 epochs. 4 Finding and Clustering Audio-Visual Caption Groundings Although we have trained our multimodal network to compute embeddings at the granularity of entire images and entire caption spectrograms, we can easily apply it in a more localized fashion. In the case of images, we can simply take any arbitrary crop of an original image and resize it to 224x224 pixels. The audio network is even more trivial to apply locally, because it is entirely convolutional and the final mean pooling layer ensures that the output will be a 1024-dim vector no matter the extent of the input. The bigger question is where to locally apply the networks in order to discover meaningful acoustic and visual patterns. Given an image and its corresponding spoken audio caption, we use the term grounding to refer to extracting meaningful segments from the caption and associating them with an appropriate subregion of the image. For example, if an image depicted a person eating ice cream and its caption contained the spoken words “A person is enjoying some ice cream,” an ideal set of groundings would entail the acoustic segment containing the word “person” linked to a bounding box around the person, and the segment containing the word “ice cream” linked to a box around the ice cream. We use a constrained brute force ranking scheme to evaluate all possible groundings (with a restricted granularity) between an image and its caption. Specifically, we divide the image into a grid, and extract all of the image crops whose boundaries sit on the grid lines. Because we are mainly interested in extracting regions of interest and not high precision object detection boxes, to keep the number of proposal regions under control we impose several restrictions. First, we use a 10x10 grid on each image regardless of its original size. Second, we define minimum and maximum aspect ratios as 2:3 and 3:2 so as not to introduce too much distortion and also to reduce the number of proposal boxes. Third, we define a minimum bounding width as 30% of the original image width, and similarly a minimum height as 30% of the original image height. In practice, this results in a few thousand proposal regions per image. To extract proposal segments from the audio 508 caption spectrogram, we similarly define a 1-dim grid along the time axis, and consider all possible start/end points at 10 frame (pixel) intervals. We impose minimum and maximum segment length constraints at 50 and 100 frames (pixels), implying that our discovered acoustic patterns are restricted to fall between 0.5 and 1 second in duration. The number of proposal segments will vary depending on the caption length, and typically number in the several thousands. 
Note that when learning groundings we consider the entire audio sequence, and do not incorporate the 10sec duration constraint imposed during training. Once we have extracted a set of proposed visual bounding boxes and acoustic segments for a given image/caption pair, we use our multimodal network to compute a similarity score between each unique image crop/acoustic segment pair. Each triplet of an image crop, acoustic segment, and similarity score constitutes a proposed grounding. A naive approach would be to simply keep the top N groundings from this list, but in practice we ran into two problems with this strategy. First, many proposed acoustic segments capture mostly silence due to pauses present in natural speech. We solve this issue by using a simple voice activity detector (VAD) which was trained on the TIMIT corpus(Garofolo et al., 1993). If the VAD estimates that 40% or more of any proposed acoustic segment is silence, we discard that entire grounding. The second problem we ran into is the fact that the top of the sorted grounding list is dominated by highly overlapping acoustic segments. This makes sense, because highly informative content words will show up in many different groundings with slightly perturbed start or end times. To alleviate this issue, when evaluating a grounding from the top of the proposal list we compare the interval intersection over union (IOU) of its acoustic segment against all acoustic segments already accepted for further consideration. If the IOU exceeds a threshold of 0.1, we discard the new grounding and continue moving down the list. We stop accumulating groundings once the scores fall to below 50% of the top score in the “keep” list, or when 10 groundings have been added to the “keep” list. Figure 1 displays a pictorial example of our grounding procedure. Once we have completed the grounding procedure, we are left with a small set of regions of interest in each image and caption spectrogram. We use the respective branches of our multimodal network to compute embedding vectors for each grounding’s image crop and acoustic segment. We then employ k-means clustering separately on the collection of image embedding vectors as well as the collection of acoustic embedding vectors. The last step is to establish an affinity score between each image cluster I and each acoustic cluster A; we do so using the equation Affinity(I, A) = X i∈I X a∈A i⊤a · Pair(i, a) (2) where i is an image crop embedding vector, a is an acoustic segment embedding vector, and Pair(i, a) is equal to 1 when i and a belong to the same grounding pair, and 0 otherwise. After clustering, we are left with a set of acoustic pattern clusters, a set of visual pattern clusters, and a set of linkages describing which acoustic clusters are associated with which image clusters. In the next section, we investigate these clusters in more detail. 5 Experiments and Analysis Table 1: Results for image search and annotation on the Places audio caption data (214k training pairs, 1k testing pairs). Recall is shown for the top 1, 5, and 10 hits. The model we use in this paper is compared against the meanpool variant of the model architecture presented in Harwath et al. (2016). For both training and testing, the captions were truncated/zero-padded to 10 seconds. 
Search Model R@1 R@5 R@10 (Harwath et al., 2016) 0.090 0.261 0.372 This work (audio) 0.112 0.312 0.431 This work (text) 0.111 0.383 0.525 Annotation Model R@1 R@5 R@10 (Harwath et al., 2016) 0.098 0.266 0.352 This work (audio) 0.120 0.307 0.438 This work (text) 0.113 0.341 0.493 We trained our multimodal network on a set of 214,585 image/caption pairs, and vetted it with an image search (given caption, find image) and annotation (given image, find caption) task similar to the one used in Harwath et al. (2016); Karpathy et al. (2014); Karpathy and Li (2015). The image annotation and search recall scores on a 1,000 image/caption pair held-out test set are shown in Table 1. Also shown in this table are the scores 509 Figure 1: An example of our grounding method. The left image displays a grid defining the allowed start and end coordinates for the bounding box proposals. The bottom spectrogram displays several audio region proposals drawn as the families of stacked red line segments. The image on the right and spectrogram on the top display the final output of the grounding algorithm. The top spectrogram also displays the time-aligned text transcript of the caption, so as to demonstrate which words were captured by the groundings. In this example, the top 3 groundings have been kept, with the colors indicating the audio segment which is grounded to each bounding box. Word Count Word Count ocean 2150 castle 766 (silence) 127 (silence) 70 the ocean 72 capital 39 blue ocean 29 large castle 24 body ocean 22 castles 23 oceans 16 (noise) 21 ocean water 16 council 13 (noise) 15 stone castle 12 of ocean 14 capitol 10 oceanside 14 old castle 10 Table 2: Examples of the breakdown of word/phrase identities of several acoustic clusters achieved by a model which uses the ASR text transcriptions for each caption instead of the speech audio. The text captions were truncated/padded to 20 words, and the audio branch of the network was replaced with a branch with the following architecture: 1. Word embedding layer of dimension 200 2. Temporal Convolution: C=512, W=3, ReLU 3. Temporal Convolution: C=1024, W=3 4. Meanpool over entire caption 5. L2 normalization One would expect that access to ASR hypotheses should improve the recall scores, but the performance gap is not enormous. Access to the ASR hypotheses provides a relative improvement of approximately 21.8% for image search R@10 and 12.5% for annotation R@10 compared to using no transcriptions or ASR whatsoever. We performed the grounding and pattern clustering steps on the entire training dataset, which resulted in a total of 1,161,305 unique grounding pairs. For evaluation, we wish to assign a label to each cluster and cluster member, but this is not completely straightforward since each acoustic segment may capture part of a word, a whole word, multiple words, etc. Our strategy is to forcealign the Google recognition hypothesis text to the audio, and then assign a label string to each acoustic segment based upon which words it overlaps in time. The alignments are created with the help of a Kaldi (Povey et al., 2011) speech recognizer 510 Table 3: Top 50 clusters with k = 500 sorted by increasing variance. Legend: |Cc| is acoustic cluster size, |Ci| is associated image cluster size, Pur. is acoustic cluster purity, σ2 is acoustic cluster variance, and Cov. is acoustic cluster coverage. A dash (-) indicates a cluster whose majority label is silence. Trans |Cc| |Ci| Pur. σ2 Cov. Trans |Cc| |Ci| Pur. σ2 Cov. 
1059 3480 0.70 0.26 snow 4331 3480 0.85 0.26 0.45 desert 1936 2896 0.82 0.27 0.67 kitchen 3200 2990 0.88 0.28 0.76 restaurant 1921 2536 0.89 0.29 0.71 mountain 4571 2768 0.86 0.30 0.38 black 4369 2387 0.64 0.30 0.17 skyscraper 843 3205 0.84 0.30 0.84 bridge 1654 2025 0.84 0.30 0.25 tree 5303 3758 0.90 0.30 0.16 castle 1298 2887 0.72 0.31 0.74 bridge 2779 2025 0.81 0.32 0.41 2349 2165 0.31 0.33 ocean 2913 3505 0.87 0.33 0.71 table 3765 2165 0.94 0.33 0.23 windmill 1458 3752 0.71 0.33 0.76 window 1890 2795 0.85 0.34 0.21 river 2643 3204 0.76 0.35 0.62 water 5868 3204 0.90 0.35 0.27 beach 1897 2964 0.79 0.35 0.64 flower 3906 2587 0.92 0.35 0.67 wall 3158 3636 0.84 0.35 0.23 sky 4306 6055 0.76 0.36 0.34 street 2602 2385 0.86 0.36 0.49 golf course 1678 3864 0.44 0.36 0.63 field 3896 3261 0.74 0.36 0.37 tree 4098 3758 0.89 0.36 0.13 lighthouse 1254 1518 0.61 0.36 0.83 forest 1752 3431 0.80 0.37 0.56 church 2503 3140 0.86 0.37 0.72 people 3624 2275 0.91 0.37 0.14 baseball 2777 1929 0.66 0.37 0.86 field 2603 3922 0.74 0.37 0.25 car 3442 2118 0.79 0.38 0.27 people 4074 2286 0.92 0.38 0.17 shower 1271 2206 0.74 0.38 0.82 people walking 918 2224 0.63 0.38 0.25 wooden 3095 2723 0.63 0.38 0.28 mountain 3464 3239 0.88 0.38 0.29 tree 3676 2393 0.89 0.39 0.11 1976 3158 0.28 0.39 snow 2521 3480 0.79 0.39 0.24 water 3102 2948 0.90 0.39 0.14 rock 2897 2967 0.76 0.39 0.26 2918 3459 0.08 0.39 night 3027 3185 0.44 0.39 0.59 station 2063 2083 0.85 0.39 0.62 chair 2589 2288 0.89 0.39 0.22 building 6791 3450 0.89 0.40 0.21 city 2951 3190 0.67 0.40 0.50 Figure 2: Scatter plot of audio cluster purity weighted by log cluster size vs variance for k = 500 (least-squares line superimposed). based on the standard WSJ recipe and trained using the Google ASR hypothesis as a proxy for the transcriptions. Any word whose duration is overlapped 30% or more by the acoustic segment is included in the label string for the segment. We then employ a majority vote scheme to derive the overall cluster labels. When computing the purity of a cluster, we count a cluster member as matching the cluster label as long as the overall cluster label appears in the member’s label string. In other words, an acoustic segment overlapping the words “the lighthouse” would receive credit for matching the overall cluster label “lighthouse”. A breakdown of the segments captured by two clusters is shown in Table 2. We investigated some simple schemes for predicting highly pure clusters, and found that the empirical variance of the cluster members (average squared distance to the cluster centroid) was a good indicator. Figure 2 displays a scatter plot of cluster purity weighted by the natural log of the cluster size against the empirical variance. Large, pure clusters are easily predicted by their low empirical variance, while a high variance is indicative of a garbage cluster. Ranking a set of k = 500 acoustic clusters by their variance, Table 3 displays some statistics for the 50 lowest-variance clusters. We see that most of the clusters are very large and highly pure, and their labels reflect interesting object categories being identified by the neural network. 
We additionally compute the coverage of each cluster by counting the total number of instances of the clus511 sky grass sunset ocean river castle couch wooden lighthouse train Figure 3: The 9 most central image crops from several image clusters, along with the majority-vote label of their most associated acoustic pattern cluster Table 4: Clustering statistics of the acoustic clusters for various values of k and different settings of the variance-based cluster pruning threshold. Legend: |C| = number of clusters remaining after pruning, |X| = number of datapoints after pruning, Pur = purity, |L| = number of unique cluster labels, AC = average cluster coverage σ2 < 0.9 σ2 < 0.65 k |C| |X| Pur |L| AC |C| |X| Pur |L| AC 250 249 1081514 .364 149 .423 128 548866 .575 108 .463 500 499 1097225 .396 242 .332 278 623159 .591 196 .375 750 749 1101151 .409 308 .406 434 668771 .585 255 .450 1000 999 1103391 .411 373 .336 622 710081 .568 318 .382 1500 1496 1104631 .429 464 .316 971 750162 .566 413 .366 2000 1992 1106418 .431 540 .237 1354 790492 .546 484 .271 ter label anywhere in the training data, and then compute what fraction of those instances were captured by the cluster. There are many examples of high coverage clusters, e.g. the “skyscraper” cluster captures 84% of all occurrences of the word “skyscraper”, while the “baseball” cluster captures 86% of all occurrences of the word “baseball”. This is quite impressive given the fact that no conventional speech recognition was employed, and neither the multimodal neural network nor the grounding algorithm had access to the text transcripts of the captions. To get an idea of the impact of the k parameter as well as a variance-based cluster pruning threshold based on Figure 2, we swept k from 250 to 2000 and computed a set of statistics shown in Table 4. We compute the standard overall cluster purity evaluation metric in addition to the average coverage across clusters. The table shows the natural tradeoff between cluster purity and redundancy (indicated by the average cluster coverage) as k is increased. In all cases, the variance-based cluster pruning greatly increases both the overall purity and average cluster coverage metrics. We also notice that more unique cluster labels are discovered with a larger k. Next, we examine the image clusters. Figure 3 displays the 9 most central image crops for a set of 10 different image clusters, along with the majority-vote label of each image cluster’s associated audio cluster. In all cases, we see that the image crops are highly relevant to their audio cluster label. We include many more example image clusters in Appendix A. In order to examine the semantic embedding space in more depth, we took the top 150 clusters from the same k = 500 clustering run described in Table 3 and performed t-SNE (van der Maaten and Hinton, 2008) analysis on the cluster centroid vectors. We projected each centroid down to 2 di512 Figure 4: t-SNE analysis of the 150 lowest-variance audio pattern cluster centroids for k = 500. Displayed is the majority-vote transcription of the each audio cluster. All clusters shown contained a minimum of 583 members and an average of 2482, with an average purity of .668. mensions and plotted their majority-vote labels in Figure 4. 
Immediately we see that different clusters which capture the same label closely neighbor one another, indicating that distances in the embedding space do indeed carry information discriminative across word types (and suggesting that a more sophisticated clustering algorithm than kmeans would perform better). More interestingly, we see that semantic information is also reflected in these distances. The cluster centroids for “lake,” “river,” “body,” “water,” “waterfall,” “pond,” and “pool” all form a tight meta-cluster, as do “restaurant,” “store,” “shop,” and “shelves,” as well as “children,” “girl,” “woman,” and “man.” Many other semantic meta-clusters can be seen in Figure 4, suggesting that the embedding space is capturing information that is highly discriminative both acoustically and semantically. Because our experiments revolve around the discovery of word and object categories, a key question to address is the extent to which the supervision used to train the VGG network constrains or influences the kinds of objects learned. Because the 1,000 object classes from the ILSVRC2012 task (Russakovsky et al., 2015) used to train the VGG network were derived from WordNet synsets (Fellbaum, 1998), we can measure the semantic similarity between the words learned by our network and the ILSVRC2012 class labels by using synset similarity measures within WordNet. We do this by first building a list of the 1,000 WordNet synsets associated with the ILSVRC2012 classes. We then take the set of unique majority-vote labels associated with the discovered word clusters for k = 500, filtered by setting a threshold on their variance (σ2 ≤0.65) so as to get rid of garbage clusters, leaving us with 197 unique acoustic cluster labels. We then look up each cluster label in WordNet, and compare all noun senses of the label to every ILSVRC2012 class synset according to the path similarity measure. This measure describes the distance between two synsets in a hyponym/hypernym hierarchy, where a score of 1 represents identity and lower scores indicate less similarity. We retain the highest score between any sense of the cluster label and any ILSVRC2012 synset. Of the 197 unique cluster labels, only 16 had a distance of 1 from any ILSVRC12 class, which would indicate an exact match. A path similarity of 0.5 indicates one degree of separation in the hyponym/hypernym hierarchy - for example, the similarity between “desk” and “table” is 0.5. 47 cluster labels were found to have a similarity of 0.5 to some ILSVRC12 class, leaving 134 cluster labels whose highest similarity to any ILSVRC12 class was less than 0.5. In 513 other words, more than two thirds of the highly pure pattern clusters learned by our network were dissimilar to all of the 1,000 ILSVRC12 classes used to pretrain the VGG network, indicating that our model is able to generalize far beyond the set of classes found in the ILSVRC12 data. We display the labels of the 40 lowest variance acoustic clusters labels along with the name and similarity score of their closest ILSVRC12 synset in Table 5. 
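The WordNet comparison just described can be sketched with NLTK's interface (a minimal illustration; ilsvrc_synsets is assumed to be the list of 1,000 synsets behind the ILSVRC2012 classes, e.g. wn.synset('castle.n.02')):

from nltk.corpus import wordnet as wn

def max_path_similarity(cluster_label, ilsvrc_synsets):
    # Compare every noun sense of the cluster label against every ILSVRC2012 synset
    # and keep the best path similarity (1.0 = identity, 0.5 = one edge apart).
    best = 0.0
    for sense in wn.synsets(cluster_label.replace(" ", "_"), pos=wn.NOUN):
        for target in ilsvrc_synsets:
            sim = sense.path_similarity(target)
            if sim is not None and sim > best:
                best = sim
    return best

Labels with no noun sense in WordNet (e.g. "people walking") simply score 0.0, matching the (none) rows of Table 5.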
Cluster ILSVRC synset Similarity snow cliff.n.01 0.14 desert cliff.n.01 0.12 kitchen patio.n.01 0.25 restaurant restaurant.n.01 1.00 mountain alp.n.01 0.50 black pool table.n.01 0.25 skyscraper greenhouse.n.01 0.33 bridge steel arch bridge.n.01 0.50 tree daisy.n.01 0.14 castle castle.n.02 1.00 ocean cliff.n.01 0.14 table desk.n.01 0.50 windmill cash machine.n.01 0.20 window screen.n.03 0.33 river cliff.n.01 0.12 water menu.n.02 0.25 beach cliff.n.01 0.33 flower daisy.n.01 0.50 wall cliff.n.01 0.33 sky cliff.n.01 0.11 street swing.n.02 0.14 golf course swing.n.02 0.17 field cliff.n.01 0.20 lighthouse beacon.n.03 1.00 forest cliff.n.01 0.20 church church.n.02 1.00 people street sign.n.01 0.17 baseball baseball.n.02 1.00 car freight car.n.01 0.50 shower swing.n.02 0.17 people walking (none) 0.00 wooden (none) 0.00 rock toilet tissue.n.01 0.20 night street sign.n.01 0.14 station swing.n.02 0.20 chair barber chair.n.01 0.50 building greenhouse.n.01 0.50 city cliff.n.01 0.12 white jean.n.01 0.33 sunset street sign.n.01 0.11 Table 5: The 40 lowest variance, uniquely-labeled acoustic clusters paired with their most similar ILSVRC2012 synset. 6 Conclusions and Future Work In this paper, we have demonstrated that a neural network trained to associate images with the waveforms representing their spoken audio captions can successfully be applied to discover and cluster acoustic patterns representing words or short phrases in untranscribed audio data. An analogous procedure can be applied to visual images to discover visual patterns, and then the two modalities can be linked, allowing the network to learn, for example, that spoken instances of the word “train” are associated with image regions containing trains. This is done without the use of a conventional automatic speech recognition system and zero text transcriptions, and therefore is completely agnostic to the language in which the captions are spoken. Further, this is done in O(n) time with respect to the number of image/caption pairs, whereas previous stateof-the-art acoustic pattern discovery algorithms which leveraged acoustic data alone run in O(n2) time. We demonstrate the success of our methodology on a large-scale dataset of over 214,000 image/caption pairs comprising over 522 hours of spoken audio data, which is to our knowledge the largest scale acoustic pattern discovery experiment ever performed. We have shown that the shared multimodal embedding space learned by our model is discriminative not only across visual object categories, but also acoustically and semantically across spoken words. The future directions in which this research could be taken are incredibly fertile. Because our method creates a segmentation as well as an alignment between images and their spoken captions, a generative model could be trained using these alignments. The model could provide a spoken caption for an arbitrary image, or even synthesize an image given a spoken description. Modeling improvements are also possible, aimed at the goal of incorporating both visual and acoustic localization into the neural network itself. The same framework we use here could be extended to video, enabling the learning of actions, verbs, environmental sounds, and the like. Additionally, by collecting a second dataset of captions for our images in a different language, such as Spanish, our model could be extended to learn the acoustic correspondences for a given object category in both languages. 
This paves the way for creating a speech-to-speech translation model not only with absolutely zero need for any sort of text transcriptions, but also with zero need for directly parallel linguistic data or manual human translations. 514 References Alessandro Bergamo, Loris Bazzani, Dragomir Anguelov, and Lorenzo Torresani. 2014. Self-taught object localization with deep networks. CoRR abs/1409.3964. http://arxiv.org/abs/1409.3964. Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. 2015. Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals. In Proceedings of CVPR. Ramazan Cinbis, Jakob Verbeek, and Cordelia Schmid. 2016. Weakly supervised object localization with multi-fold multiple instance learning. In IEEE Transactions on Pattern Analysis and Machine Intelligence. Mark Dredze, Aren Jansen, Glen Coppersmith, and Kenneth Church. 2010. NLP on spoken documents without ASR. In Proceedings of EMNLP. Hao Fang, Saurabh Gupta, Forrest Iandola, Srivastava Rupesh, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, Platt John C., C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In Proceedings of CVPR. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visual-semantic embedding model. In Proceedings of the Neural Information Processing Society. John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallet, Nancy Dahlgren, and Victor Zue. 1993. The TIMIT acoustic-phonetic continuous speech corpus. Lieke Gelderloos and Grzegorz Chrupaa. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning. In arXiv:1610.03342. Sharon Goldwater, Thomas Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: exploring the effects of context. In Cognition, vol. 112 pp.21-54. David Harwath and James Glass. 2015. Deep multimodal semantic embeddings for speech and images. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding. David Harwath, Timothy J. Hazen, and James Glass. 2012. Zero resource spoken audio corpus analysis. In Proceedings of ICASSP. David Harwath, Antonio Torralba, and James R. Glass. 2016. Unsupervised learning of spoken language with visual context. In Proceedings of NIPS. Aren Jansen, Kenneth Church, and Hynek Hermansky. 2010. Toward spoken term discovery at scale with zero resources. In Proceedings of Interspeech. Aren Jansen and Benjamin Van Durme. 2011. Efficient spoken term discovery using randomized algorithms. In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding. Justin Johnson, Andrej Karpathy, and Li Fei-Fei. 2016. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of CVPR. Mark Johnson. 2008. Unsupervised word segmentation for sesotho using adaptor grammars. In Proceedings of ACL SIG on Computational Morphology and Phonology. Andrej Karpathy, Armand Joulin, and Fei-Fei Li. 2014. Deep fragment embeddings for bidirectional image sentence mapping. In Proceedings of the Neural Information Processing Society. Andrej Karpathy and Fei-Fei Li. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of CVPR. Chia-Ying Lee and James Glass. 2012. 
A nonparametric Bayesian approach to acoustic model discovery. In Proceedings of the 2012 meeting of the Association for Computational Linguistics. Chia-Ying Lee, Timothy J. O’Donnell, and James Glass. 2015. Unsupervised lexicon discovery from acoustic input. In Transactions of the Association for Computational Linguistics. M. Paul Lewis, Gary F. Simon, and Charles D. Fennig. 2016. Ethnologue: Languages of the World, Nineteenth edition. SIL International. Online version: http://www.ethnologue.com. Lucas Ondel, Lukas Burget, and Jan Cernocky. 2016. Variational inference for acoustic unit discovery. In 5th Workshop on Spoken Language Technology for Underresourced Language. Alex Park and James Glass. 2008. Unsupervised pattern discovery in speech. In IEEE Transactions on Audio, Speech, and Language Processing vol. 16, no.1, pp. 186-197. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. Deb Roy. 2003. Grounded spoken language acquisition: Experiments in word learning. In IEEE Transactions on Multimedia. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211–252. https://doi.org/10.1007/s11263-015-0816-y. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. In Transactions of the Association for Computational Linguistics. 515 Richard Socher and Fei-Fei Li. 2010. Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora. In Proceedings of CVPR. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing high-dimensional data using t-sne. In Journal of Machine Learning Research. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dimitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of CVPR. Yaodong Zhang and James Glass. 2009. Unsupervised spoken keyword spotting via segmental DTW on Gaussian posteriorgrams. In Proceedings ASRU. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2015. Object detectors emerge in deep scene CNNs. In Proceedings of ICLR. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. 2014. Learning deep features for scene recognition using places database. In Proceedings of the Neural Information Processing Society. 516 A Additional Cluster Visualizations beach cliff pool desert field chair table staircase statue stone church forest mountain skyscraper trees waterfall windmills window city bridge flowers man wall archway baseball boat shelves cockpit girl children building rock kitchen plant hallway 517
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 518–529 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1048 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 518–529 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1048 Joint CTC/attention decoding for end-to-end speech recognition Takaaki Hori, Shinji Watanabe, John R. Hershey Mitsubishi Electric Research Laboratories (MERL) {thori,watanabe,hershey}@merl.com Abstract End-to-end automatic speech recognition (ASR) has become a popular alternative to conventional DNN/HMM systems because it avoids the need for linguistic resources such as pronunciation dictionary, tokenization, and contextdependency trees, leading to a greatly simplified model-building process. There are two major types of end-to-end architectures for ASR: attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC), uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes a joint decoding algorithm for end-to-end ASR with a hybrid CTC/attention architecture, which effectively utilizes both advantages in decoding. We have applied the proposed method to two ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and showing the comparable performance to conventional state-of-the-art DNN/HMM ASR systems without linguistic resources. 1 Introduction Automatic speech recognition (ASR) is currently a mature set of technologies that have been widely deployed, resulting in great success in interface applications such as voice search. A typical ASR system is factorized into several modules including acoustic, lexicon, and language models based on a probabilistic noisy channel model (Jelinek, 1976). Over the last decade, dramatic improvements in acoustic and language models have been driven by machine learning techniques known as deep learning (Hinton et al., 2012). However, current systems lean heavily on the scaffolding of complicated legacy architectures that grew up around traditional techniques. For example, when we build an acoustic model from scratch, we have to first build hidden Markov model (HMM) and Gaussian mixture model (GMM) followed by deep neural networks (DNN). In addition, the factorization of acoustic, lexicon, and language models is derived by conditional independence assumptions (especially Markov assumptions), although the data do not necessarily follow such assumptions leading to model misspecification. This factorization form also yields a local optimum since the above modules are optimized separately. Further, to well factorize acoustic and language models, the system requires linguistic knowledge based on a lexicon model, which is usually based on a hand-crafted pronunciation dictionary to map word to phoneme sequence. In addition to the pronunciation dictionary issue, some languages, which do not explicitly have a word boundary, need languagespecific tokenization modules (Kudo et al., 2004; Bird, 2006) for language modeling. Finally, inference/decoding has to be performed by integrating all modules resulting in complex decoding. 
Consequently, it is quite difficult for non-experts to use/develop ASR systems for new applications, especially for new languages. End-to-end ASR has the goal of simplifying the above module-based architecture into a singlenetwork architecture within a deep learning framework, in order to address the above issues. There are two major types of end-to-end architectures for ASR: attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC), uses Markov 518 assumptions to efficiently solve sequential problems by dynamic programming (Chorowski et al., 2014; Graves and Jaitly, 2014). The attention-based end-to-end method solves the ASR problem as a sequence mapping from speech feature sequences to text by using encoderdecoder architecture. The decoder network uses an attention mechanism to find an alignment between each element of the output sequence and the hidden states generated by the acoustic encoder network for each frame of acoustic input (Chorowski et al., 2014, 2015; Chan et al., 2015; Lu et al., 2016). This basic temporal attention mechanism is too flexible in the sense that it allows extremely non-sequential alignments. This may be fine for applications such as machine translation where input and output word order are different (Bahdanau et al., 2014; Wu et al., 2016). However, in speech recognition, the feature inputs and corresponding letter outputs generally proceed in the same order. Another problem is that the input and output sequences in ASR can have very different lengths, and these vary greatly from case to case, depending on the speaking rate and writing system, making it more difficult to track the alignment. However, an advantage is that the attention mechanism does not require any conditional independence assumptions, and could address all the problems cited above. Although the alignment problems of attention-based mechanisms have been partially addressed in (Chorowski et al., 2014; Chorowski and Jaitly, 2016) using various mechanisms, here we propose more rigorous constraints by using CTC-based alignment to guide the decoding. CTC permits an efficient computation of a strictly monotonic alignment using dynamic programming (Graves et al., 2006; Graves and Jaitly, 2014) although it requires language models and graph-based decoding (Miao et al., 2015) except in the case of huge training data (Amodei et al., 2015; Soltau et al., 2016). We propose to take advantage of the constrained CTC alignment in a hybrid CTC/attention based system during decoding. The proposed method adopts a CTC/attention hybrid architecture, which was originally designed to regularize an attention-based encoder network by additionally using a CTC during training (Kim et al., 2017). The proposed method extends the architecture to perform one-pass/rescoring joint decoding, where hypotheses of attention-based ASR are boosted by scores obtained by using CTC outputs. This greatly reduces irregular alignments without any heuristic search techniques. The proposed method is applied to Japanese and Mandarin ASR tasks, which require extra linguistic resources including morphological analyzer (Kudo et al., 2004) or word segmentation (Xue et al., 2003) in addition to pronunciation dictionary to provide accurate lexicon and language models in conventional DNN/HMM ASR. 
Surprisingly, the method achieved performance comparable to, and in some cases superior to, several state-of-the-art DNN/HMM ASR systems, without using the above linguistic resources. 2 From DNN/HMM to end-to-end ASR This section briefly provides a formulation of conventional DNN/HMM ASR and CTC or attention based end-to-end ASR. 2.1 Conventional DNN/HMM ASR ASR deals with a sequence mapping from Tlength speech feature sequence X = {xt ∈ RD|t = 1, · · · , T} to N-length word sequence W = {wn ∈V|n = 1, · · · , N}. xt is a D dimensional speech feature vector (e.g., log Mel filterbanks) at frame t and wn is a word at position n in vocabulary V. ASR is mathematically formulated with the Bayes decision theory, where the most probable word sequence ˆW is estimated among all possible word sequences V∗as follows: ˆW = arg max W∈V∗p(W|X). (1) The posterior distribution p(W|X) is factorized into the following three distributions by using the Bayes theorem and introducing HMM state sequence S = {st ∈{1, · · · , J}|t = 1, · · · , T}: Eq. (1) ≈arg max W X S p(X|S)p(S|W)p(W). The three factors, p(X|S), p(S|W), and p(W), are acoustic, lexicon, and language models, respectively. These are further factorized by using a probabilistic chain rule and conditional independence assumption as follows:      p(X|S) ≈Q t p(st|xt) p(st) , p(S|W)≈Q t p(st|st−1, W), p(W) ≈Q n p(wn|wn−1, . . . , wn−m−1), 519 where the acoustic model is replaced with the product of framewise posterior distributions p(st|xt) computed by powerful DNN classifiers by using so-called pseudo likelihood trick (Bourlard and Morgan, 1994). p(st|st−1, W) is represented by an HMM state transition given W, and the conversion from W to HMM states is deterministically performed by using a pronunciation dictionary through a phoneme representation. p(wn|wn−1, . . . , wn−m−1) is obtained based on an (m −1)th-order Markov assumption as a mgram model. These conditional independence assumptions are often regarded as too strong assumption, leading to model mis-specification. Also, to train the framewise posterior p(st|xt), we have to provide a framewise state alignment st as a target, which is often provided by a GMM/HMM system. Thus, conventional DNN/HMM systems make the ASR problem formulated with Eq. (1) feasible by using factorization and conditional independence assumptions, at the cost of the problems discussed in Section 1. 2.2 Connectionist Temporal Classification (CTC) The CTC formulation also follows from Bayes decision theory (Eq. (1)). Note that the CTC formulation uses L-length letter sequence C = {cl ∈ U|l = 1, · · · , L} with a set of distinct letters U. Similarly to Section 2.1, by introducing framewise letter sequence with an additional ”blank” ( < b >) symbol Z = {zt ∈U ∪< b >|t = 1, · · · , T}, and by using the probabilistic chain rule and conditional independence assumption, the posterior distribution p(C|X) is factorized as follows: p(C|X) ≈ X Z Y t p(zt|zt−1, C)p(zt|X) | {z } ≜pctc(C|X) p(C) p(Z) (2) As a result, CTC has three distribution components similar to the DNN/HMM case, i.e., framewise posterior distribution p(zt|X), transition probability p(zt|zt−1, C)1, and prior distributions of letter and hidden-state sequences, 1Note that in the implementation, the transition value is not normalized (i.e., not a probabilistic value) (Graves and Jaitly, 2014; Miao et al., 2015), similar to the HMM state transition implementation (Povey et al., 2011) p(C) and p(Z), respectively. 
We also define the CTC objective function pctc(C|X) used in the later formulation. The framewise posterior distribution p(zt|X) is conditioned on all inputs X, and it is quite natural to be modeled by using bidirectional long short-term memory (BLSTM): p(zt|X) = Softmax(Lin(ht)) and ht = BLSTM(X). Softmax(·) is a sofmax activation function, and Lin(·) is a linear layer to convert hidden vector ht to a (|U|+1) dimensional vector (+1 means a blank symbol introduced in CTC). Although Eq. (2) has to deal with a summation over all possible Z, it is efficiently computed by using dynamic programming (Viterbi/forwardbackward algorithm) thanks to the Markov property. In summary, although CTC and DNN/HMM systems are similar to each other due to conditional independence assumptions, CTC does not require pronunciation dictionaries and omits an GMM/HMM construction step. 2.3 Attention mechanism Compared with hybrid and CTC approaches, the attention-based approach does not make any conditional independence assumptions, and directly estimates the posterior p(C|X) based on a probabilistic chain rule, as follows: p(C|X) = Y l p(cl|c1, · · · , cl−1, X) | {z } ≜patt(C|X) , (3) where patt(C|X) is an attention-based objective function. p(cl|c1, · · · , cl−1, X) is obtained by p(cl|c1, · · · , cl−1, X) = Decoder(rl, ql−1, cl−1) ht = Encoder(X) (4) alt = Attention({al−1}t, ql−1, ht) (5) rl = X t altht. (6) Eq. (4) converts input feature vectors X into a framewise hidden vector ht in an encoder network based on BLSTM, i.e., Encoder(X) ≜ BLSTM(X). Attention(·) in Eq. (5) is based on a content-based attention mechanism with convolutional features, as described in (Chorowski et al., 2015) (see Appendix A). alt is an attention weight, and represents a soft alignment of hidden vector ht for each output cl based on the weighted summation of hidden vectors to form letter-wise hidden vector rl in Eq. (6). A decoder network is another 520 recurrent network conditioned on previous output cl−1 and hidden vector ql−1, similar to RNNLM, in addition to letter-wise hidden vector rl. We use Decoder(·) ≜Softmax(Lin(LSTM(·))). Attention-based ASR does not explicitly separate each module, and potentially handles the all issues pointed out in Section 1. It implicitly combines acoustic models, lexicon, and language models as encoder, attention, and decoder networks, which can be jointly trained as a single deep neural network. Compared with DNN/HMM and CTC, which are based on a transition form from t −1 to t due to the Markov assumption, the attention mechanism does not maintain this constraint, and often provides irregular alignments. A major focus of this paper is to address this problem by using joint CTC/attention decoding. 3 Joint CTC/attention decoding This section explains a hybrid CTC/attention network, which potentially utilizes both benefits of CTC and attention in ASR. 3.1 Hybrid CTC/attention architecture Kim et al. (2017) uses a CTC objective function as an auxiliary task to train the attention model encoder within the multitask learning (MTL) framework, and this paper also uses the same architecture. Figure 1 illustrates the overall architecture of the framework, where the same BLSTM is shared with CTC and attention encoder networks, respectively). Unlike the sole attention model, the forward-backward algorithm of CTC can enforce monotonic alignment between speech and label sequences during training. 
That is, rather than solely depending on data-driven attention methods to estimate the desired alignments in long sequences, the forward-backward algorithm in CTC helps to speed up the process of estimating the desired alignment. The objective to be maximized is a logarithmic linear combination of the CTC and attention objectives, i.e., pctc(C|X) in Eq. (2) and patt(C|X) in Eq. (3): LMTL = λ log pctc(C|X) + (1 −λ) log patt(C|X), (7) with a tunable parameter λ : 0 ≤λ ≤1. 3.2 Decoding strategies The inference step of our joint CTC/attentionbased end-to-end speech recognition is performed ㅡ ㅡ ㅡ z2 … sos eos c1 q0 r0 r1 rL H h2 h4 q1 qL hT x1 x2 x3 x4 x5 x6 xT Shared Encoder CTC Attention Decoder x7 x8 z4 h6 h8 … c2 r2 q2 … c1 c2 … c1 c2 … Figure 1: Joint CTC/attention based end-to-end framework: the shared encoder is trained by both CTC and attention model objectives simultaneously. The shared encoder transforms our input sequence {xt · · · xT } into high level features H = {ht · · · hT }, and the attention decoder generates the letter sequence {c1 · · · cL}. by label synchronous decoding with a beam search similar to conventional attention-based ASR. However, we take the CTC probabilities into account to find a hypothesis that is better aligned to the input speech, as shown in Figure 1. Hereafter, we describe the general attention-based decoding and conventional techniques to mitigate the alignment problem. Then, we propose joint decoding methods with a hybrid CTC/attention architecture. 3.2.1 Attention-based decoding in general End-to-end speech recognition inference is generally defined as a problem to find the most probable letter sequence ˆC given the speech input X, i.e. ˆC = arg max C∈U∗log p(C|X). (8) In attention-based ASR, p(C|X) is computed by Eq. (3), and ˆC is found by a beam search technique. Let Ωl be a set of partial hypotheses of the length l. At the beginning of the beam search, Ω0 contains only one hypothesis with the starting symbol <sos> and the hypothesis score α(<sos>, X) is set to 0. For l = 1 to Lmax, each partial hypothesis in Ωl−1 is expanded by appending possible single letters, and the new hypotheses are stored in Ωl, where Lmax is the maximum 521 length of the hypotheses to be searched. The score of each new hypothesis is computed in the log domain as α(h, X) = α(g, X) + log p(c|g, X), (9) where g is a partial hypothesis in Ωl−1, c is a letter appended to g, and h is the new hypothesis such that h = g · c. If c is a special symbol that represents the end of a sequence, <eos>, h is added to ˆΩbut not Ωl, where ˆΩdenotes a set of complete hypotheses. Finally, ˆC is obtained by ˆC = arg max h∈ˆΩ α(h, X). (10) In the beam search process, Ωl is allowed to hold only a limited number of hypotheses with higher scores to improve the search efficiency. Attention-based ASR, however, may be prone to include deletion and insertion errors because of its flexible alignment property, which can attend to any portion of the encoder state sequence to predict the next label, as discussed in Section 2.3. Since attention is generated by the decoder network, it may prematurely predict the end-ofsequence label, even when it has not attended to all of the encoder frames, making the hypothesis too short. On the other hand, it may predict the next label with a high probability by attending to the same portions as those attended to before. In this case, the hypothesis becomes very long and includes repetitions of the same letter sequence. 
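A minimal sketch of this label-synchronous beam search is given below. The function score_next is a hypothetical stand-in for the attention decoder, returning log p(c | g, X) for every label in labels plus <eos>; it is not part of the paper's code.

def attention_beam_search(X, score_next, labels, sos, eos, beam_width, L_max):
    beams = [((sos,), 0.0)]        # Omega_0: (partial hypothesis g, score alpha(g, X))
    finished = []                  # Omega_hat: complete hypotheses ending in <eos>
    for _ in range(L_max):
        candidates = []
        for g, score in beams:
            log_probs = score_next(g, X)               # aligned with labels + [eos]
            for c, lp in zip(list(labels) + [eos], log_probs):
                h = g + (c,)
                s = score + lp                         # Eq. (9)
                (finished if c == eos else candidates).append((h, s))
        beams = sorted(candidates, key=lambda t: t[1], reverse=True)[:beam_width]
        if not beams:
            break
    # Eq. (10); assumes at least one hypothesis terminated with <eos>.
    return max(finished, key=lambda t: t[1])

The joint decoding of Section 3.2.3 below keeps this search loop and only changes how the hypothesis score of Eq. (9) is computed.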
3.2.2 Conventional decoding techniques To alleviate the alignment problem, a length penalty term is commonly used to control the hypothesis length to be selected (Chorowski et al., 2015; Bahdanau et al., 2016). With the length penalty, the decoding objective in Eq. (8) is changed to ˆC = arg max C∈U∗{log p(C|X) + γ|C|} , (11) where |C| is the length of the sequence C, and γ is a tunable parameter. However, it is actually difficult to completely exclude hypotheses that are too long or too short even if γ is carefully tuned. It is also effective to control the hypothesis length by the minimum and maximum lengths to some extent, where the minimum and maximum are selected as fixed ratios to the length of the input speech. However, since there are exceptionally long or short transcripts compared to the input speech, it is difficult to balance saving such exceptional transcripts and preventing hypotheses with irrelevant lengths. Another approach is the coverage term recently proposed in (Chorowski and Jaitly, 2016), which is incorporated in the decoding objective in Eq. (11) as ˆC = arg max C∈U∗{log p(C|X) + γ|C| +η · coverage(C|X)} , (12) where the coverage term is computed by coverage(C|X) = T X t=1 " L X l=1 alt > τ # . (13) η and τ are tunable parameters. The coverage term represents the number of frames that have received a cumulative attention greater than τ. Accordingly, it increases when paying close attention to some frames for the first time, but does not increase when paying attention again to the same frames. This property is effective for avoiding looping of the same label sequence within a hypothesis. However, it is still difficult to obtain a common parameter setting for γ, η, τ, and the optional min/max lengths so that they are appropriate for any speech data from different tasks. 3.2.3 Joint decoding Our joint CTC/attention approach combines the CTC and attention-based sequence probabilities in the inference step, as well as the training step. Suppose pctc(C|X) in Eq. (2) and patt(C|X) in Eq. (3) are the sequence probabilities given by CTC and the attention model. The decoding objective is defined similarly to Eq. (7) as ˆC = arg max C∈U∗{λ log pctc(C|X) +(1 −λ) log patt(C|X)} . (14) The CTC probability enforces a monotonic alignment that does not allow large jumps or looping of the same frames. Accordingly, it is possible to choose a hypothesis with a better alignment and exclude irrelevant hypotheses without relying on the coverage term, length penalty, or min/max lengths. In the beam search process, the decoder needs to compute a score for each partial hypothesis using Eq. (9). However, it is nontrivial to combine the CTC and attention-based scores in the beam 522 search, because the attention decoder performs it output-label-synchronously while CTC performs it frame-synchronously. To incorporate the CTC probabilities in the hypothesis score, we propose two methods. Rescoring The first method is a two-pass approach, in which the first pass obtains a set of complete hypotheses using the beam search, where only the attentionbased sequence probabilities are considered. The second pass rescores the complete hypotheses using the CTC and attention probabilities, where the CTC probabilities are obtained by the forward algorithm for CTC (Graves et al., 2006). The rescoring pass obtains the final result according to ˆC = arg max h∈ˆΩ {λαctc(h, X) + (1 −λ)αatt(h, X)} , (15) where ( αctc(h, X) ≜log pctc(h|X) αatt(h, X) ≜log patt(h|X) . 
(16) One-pass decoding The second method is one-pass decoding, in which we compute the probability of each partial hypothesis using CTC and an attention model. Here, we utilize the CTC prefix probability (Graves, 2008) defined as the cumulative probability of all label sequences that have the partial hypothesis h as their prefix: pctc(h, . . . |X) = X ν∈(U∪{<eos>})+ pctc(h · ν|X), and we define the CTC score as αctc(h, X) ≜log pctc(h, . . . |X), (17) where ν represents all possible label sequences except the empty string. The CTC score cannot be obtained recursively as in Eq. (9), but it can be computed efficiently by keeping the forward probabilities over the input frames for each partial hypothesis. Then it is combined with αatt(h, X). The beam search algorithm for one-pass decoding is shown in Algorithm 1. Ωl and ˆΩare initialized in lines 2 and 3 of the algorithm, which are implemented as queues that accept partial hypotheses of the length l and complete hypotheses, respectively. In lines 4–25, each partial hypothesis g in Ωl−1 is extended by each label c Algorithm 1 Joint CTC/attention one-pass decoding 1: procedure ONEPASSBEAMSEARCH(X,Lmax) 2: Ω0 ←{<sos>} 3: ˆΩ←∅ 4: for l = 1 . . . Lmax do 5: Ωl ←∅ 6: while Ωl−1 ̸= ∅do 7: g ←HEAD(Ωl−1) 8: DEQUEUE(Ωl−1) 9: for each c ∈U ∪{<eos>} do 10: h ←g · c 11: α(h,X)←λαctc(h,X)+(1−λ)αatt(h,X) 12: if c = <eos> then 13: ENQUEUE(ˆΩ, h) 14: else 15: ENQUEUE(Ωl, h) 16: if |Ωl| > beamWidth then 17: REMOVEWORST(Ωl) 18: end if 19: end if 20: end for 21: end while 22: if ENDDETECT(ˆΩ, l) = true then 23: break ▷exit for loop 24: end if 25: end for 26: return arg maxh∈ˆΩα(h, X) 27: end procedure in the label set U. Each extended hypothesis h is scored in line 11, where CTC and attentionbased scores are obtained by αctc() and αatt(). After that, if c = <eos>, the hypothesis h is assumed to be complete and stored in ˆΩin line 13. If c ̸= <eos>, h is stored in Ωl in line 15, where the number of hypotheses in Ωl is checked in line 16. If the number exceeds the beam width, the hypothesis with the worst score in Ωl is removed by REMOVEWORST() in line 17. In line 11, the CTC and attention model scores are computed for each partial hypothesis. The attention score is easily obtained in the same manner as Eq. (9), whereas the CTC score requires a modified forward algorithm that computes it label-synchronously. The algorithm to compute the CTC score is summarized in Appendix B. By considering the attention and CTC scores during the beam search, partial hypotheses with irregular alignments can be excluded, and the number of search errors is reduced. We can optionally apply an end detection technique to reduce the computation by stopping the beam search before l reaches Lmax. Function ENDDETECT(ˆΩ, l) in line 22 returns true if there is little chance of finding complete hypotheses with higher scores as l increases in the future. 523 In our implementation, the function returns true if M−1 X m=0 " max h∈ˆΩ:|h|=l−m α(h,X)−max h′∈ˆΩ α(h′, X)<Dend # =M, (18) where Dend and M are predetermined thresholds. This equation becomes true if complete hypotheses with smaller scores are generated M times consecutively. This technique is also available in attention-based decoding and rescoring methods described in Sections 3.2.1–3.2.3. 4 Experiments We used Japanese and Mandarin Chinese ASR benchmarks to show the effectiveness of the proposed joint CTC/attention decoding approach. 
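For reference, the scoring step in line 11 of Algorithm 1 and the end-detection test of Eq. (18) can be sketched as follows. alpha_ctc and alpha_att are hypothetical helpers returning the CTC prefix score of Eq. (17) and the attention score accumulated as in Eq. (9); this is an illustration, not the authors' implementation.

def joint_score(h, X, lam, alpha_ctc, alpha_att):
    # Line 11 of Algorithm 1, i.e. Eq. (14) applied to a partial hypothesis h.
    return lam * alpha_ctc(h, X) + (1.0 - lam) * alpha_att(h, X)

def end_detect(finished, l, D_end, M):
    # finished: list of (complete_hypothesis, score); l: current hypothesis length.
    # True if, for each of the last M lengths, the best complete hypothesis of that
    # length scores below the overall best by more than the (negative) margin D_end.
    if not finished:
        return False
    best = max(score for _, score in finished)
    hits = 0
    for m in range(M):
        scores = [score for h, score in finished if len(h) == l - m]
        m_best = max(scores) if scores else float("-inf")
        if m_best - best < D_end:
            hits += 1
    return hits == M        # Eq. (18)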
The main reason for choosing these two languages is that those ideogram languages have relatively shorter lengths for letter sequences than those in alphabet languages, which reduces computational complexities greatly, and makes it easy to handle context information in a decoder network. Our preliminary investigation shows that Japanese and Mandarin Chinese end-to-end ASR can be easily scaled up, and shows state-of-the-art performance without using various tricks developed in English tasks. Also, we would like to emphasize that the system did not use language-specific processing (e.g., morphological analyzer, Pinyin dictionary), and simply used all appeared characters in their transcriptions including Japanese syllable and Kanji, Chinese, Arabic number, and alphabet characters, as they are. 4.1 Corpus of Spontaneous Japanese (CSJ) We demonstrated ASR experiments by using the Corpus of Spontaneous Japanese (CSJ) (Maekawa et al., 2000). CSJ is a standard Japanese ASR task based on a collection of monologue speech data including academic lectures and simulated presentations. It has a total of 581 hours of training data and three types of evaluation data, where each evaluation task consists of 10 lectures (totally 5 hours). As input features, we used 40 mel-scale filterbank coefficients, with their first and second order temporal derivatives to obtain a total of 120dimensional feature vector per frame. The encoder was a 4-layer BLSTM with 320 cells in each layer and direction, and linear projection layer is followed by each BLSTM layer. The 2nd and 3rd bottom layers of the encoder read every second hidden state in the network below, reducing the utterance length by the factor of 4. We used the content-based attention mechanism (Chorowski et al., 2015), where the 10 centered convolution filters of width 100 were used to extract the convolutional features. The decoder network was a 1-layer LSTM with 320 cells. The AdaDelta algorithm (Zeiler, 2012) with gradient clipping (Pascanu et al., 2012) was used for the optimization. Dend and M in Eq (18) were set as log 1e−10 and 3, respectively. The hybrid CTC/attention ASR was implemented by using the Chainer deep learning toolkit (Tokui et al., 2015). Table 1 first compares the character error rate (CER) for conventional attention and MTL based end-to-end ASR without the joint decoding. λ in Eq. (7) was set to 0.1. When decoding, we manually set the minimum and maximum lengths of output sequences by 0.025 and 0.15 times input sequence lengths, respectively. The length penalty γ in Eq. (11) was set to 0.1. Multitask learning (MTL) significantly outperformed attention-based ASR in the all evaluation tasks, which confirms the effectiveness of a hybrid CTC/attention architecture. Table 1 also shows that joint decoding, described in Section 3.2, further improved the performance without setting any search parameters (maximum and minimum lengths, length penalty), but only setting a weight parameter λ = 0.1 in Eq. (15) similar to the MTL case. Figure 2 also compares the dependency of λ on the CER for the CSJ evaluation tasks, and showing that λ was not so sensitive to the performance if we set λ around the value we used at MTL (i.e., 0.1). We also compare the performance of the proposed MTL-large, which has a larger network (5-layer encoder network), with the conventional state-of-the-art techniques obtained by using linguistic resources. 
The state-of-the-art CERs of GMM discriminative training and DNNsMBR/HMM systems are obtained from the Kaldi recipe (Moriya et al., 2015) and a system based on syllable-based CTC with MAP decoding (Kanda et al., 2016). The Kaldi recipe systems use academic lectures (236h) for AM training and all training-data transcriptions for LM training. Unlike the proposed method, these methods use linguistic resources including a morphological analyzer, pronunciation dictionary, and language model. Note that since the amount of training 524 Table 1: Character error rate (CER) for conventional attention and hybrid CTC/attention end-to-end ASR. Corpus of Spontaneous Japanese speech recognition (CSJ) task. Model Hour Task1 Task2 Task3 Attention 581 11.4 7.9 9.0 MTL 581 10.5 7.6 8.3 MTL + joint decoding (rescoring) 581 10.1 7.1 7.8 MTL + joint decoding (one pass) 581 10.0 7.1 7.6 MTL-large + joint decoding (rescoring) 581 8.4 6.2 6.9 MTL-large + joint decoding (one pass) 581 8.4 6.1 6.9 GMM-discr. (Moriya et al., 2015) 236 for AM, 581 for LM 11.2 9.2 12.1 DNN/HMM (Moriya et al., 2015) 236 for AM, 581 for LM 9.0 7.2 9.6 CTC-syllable (Kanda et al., 2016) 581 9.4 7.3 7.5 6.0   7.0   8.0   9.0   10.0   11.0   12.0   13.0   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0   Character  Error  Rate  (%)   CTC  weight   Task1   Task2   Task3   Figure 2: The effect of weight parameter λ in Eq. (14) on the CSJ evaluation tasks (The CERs were obtained by one-pass decoding). data and experimental configurations of the proposed and reference methods are different, it is difficult to compare the performance listed in the table directly. However, since the CERs of the proposed method are superior to those of the best reference results, we can state that the proposed method achieves the state-of-the-art performance. 4.2 Mandarin telephone speech We demonstrated ASR experiments on HKUST Mandarin Chinese conversational telephone speech recognition (MTS) (Liu et al., 2006). It has 5 hours recording for evaluation, and we extracted 5 hours from training data as a development set, and used the rest (167 hours) as a training set. All experimental conditions were same as those in Section 4.1 except that we used the λ = 0.5 in training and decoding instead of 0.1 based on our preliminary investigation and 80 mel-scale filterbank coefficients with pitch features as suggested in (Miao et al., 2016). In decoding, we also added a result of the coverage-term based decoding (Chorowski and Jaitly, 2016), as discussed in Section 3.2 (η = 1.5, τ = 0.5, γ = −0.6 for attention model and η = 1.0, τ = 0.5, γ = −0.1 for MTL), since it was difficult to eliminate the irregular alignments during decoding by only tuning the maximum and minimum lengths and length penalty (we set the minimum and maximum lengths of output sequences by 0.0 and 0.1 times input sequence lengths, respectively and set γ = 0.6 in Table 2). Table 2 shows the effectiveness of MTL and joint decoding over the attention-based approach, especially showing the significant improvement of the joint CTC/attention decoding. Similar to the CSJ experiments in Section 4.1, we did not use the length-penalty term or the coverage term in joint decoding. This is an advantage of joint decoding over conventional approaches that require many tuning parameters. We also generated more training data by linearly scaling the audio lengths by factors of 0.9 and 1.1 (speed perturb.). 
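One common way to produce such speed-perturbed copies is sox's speed effect, as used in Kaldi-style recipes; the snippet below is an illustrative sketch and the file names are hypothetical.

import subprocess

def speed_perturb(wav_in, wav_out, factor):
    # sox's `speed` effect resamples the waveform, changing duration and pitch together.
    subprocess.run(["sox", wav_in, wav_out, "speed", str(factor)], check=True)

for factor in (0.9, 1.1):
    speed_perturb("train_utt.wav", "train_utt_sp%.1f.wav" % factor, factor)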
The final model achieved 29.9% without using linguistic resources, which defeats moderate state-of-the-art systems including CTC-based methods2. 4.3 Decoding speed We evaluated the speed of the joint decoding methods described in Section 3.2.3. ASR decoding was performed with different beam widths of 1, 3, 5, 10, and 20, and the processing time and CER were measured using a computer with Intel(R) Xeon(R) processors, E5-2690 v3, 2.6 GHz. Although the processors were multicore CPUs and the computer had GPUs, we ran the decoding program as a 2 Although the proposed method did not reach the performance obtained by a time delayed neural network (TDNN) with lattice-free sequence discriminative training (Povey et al., 2016), our recent work scored 28.0%, and outperformed the lattice-free MMI result with advanced network architectures. 525 Table 2: Character error rate (CER) for conventional attention and hybrid CTC/attention end-to-end ASR. HKUST Mandarin Chinese conversational telephone speech recognition (MTS) task. Model dev eval Attention 40.3 37.8 MTL 38.7 36.6 Attention + coverage 39.4 37.6 MTL + coverage 36.9 35.3 MTL + joint decoding (rescoring) 35.9 34.2 MTL + joint decoding (one pass) 35.5 33.9 MTL-large (speed perturb.) + joint decoding (rescoring) 31.1 30.1 MTL-large (speed perturb.) + joint decoding (one pass) 31.0 29.9 DNN/HMM – 35.9 LSTM/HMM (speed perturb.) – 33.5 CTC with language model (Miao et al., 2016) – 34.8 TDNN/HMM, lattice-free MMI (speed perturb.) (Povey et al., 2016) – 28.2 single-threaded process on a CPU to investigate its basic computational cost. Table 3: RTF versus CER for the one-pass and rescoring methods. Beam Rescoring One pass Task width RTF CER RTF CER 1 0.66 10.9 0.66 10.7 CSJ 3 1.11 10.3 1.02 10.1 Task1 5 1.50 10.2 1.31 10.0 10 2.46 10.1 2.07 10.0 20 5.02 10.1 3.76 10.0 1 0.68 37.1 0.65 35.9 HKUST 3 0.89 34.9 0.86 34.4 Eval set 5 1.04 34.6 1.03 34.2 10 1.55 34.4 1.50 34.0 20 2.66 34.2 2.55 33.9 Table 3 shows the relationships between the real-time factor (RTF) and the CER for the CSJ and HKUST tasks. We evaluated the rescoring and one-pass decoding methods when using the end detection in Eq. (18). In every beam width, we can see that the one-pass method runs faster with an equal or lower CER than the rescoring method. This result demonstrates that the one-pass decoding is effective for reducing search errors. Finally, we achieved 1xRT with one-pass decoding when using a beam width around 3 to 5, even though it was a single-threaded process on a CPU. However, the decoding process has not yet achieved realtime ASR since CTC and the attention mechanism need to access all of the frames of the input utterance even when predicting the first label. This is an essential problem of most end-to-end ASR approaches and will be solved in future work. 5 Summary and discussion This paper proposes end-to-end ASR by using joint CTC/attention decoding, which outperformed ordinary attention-based end-to-end ASR by solving the misalignment issues. The joint decoding methods actually reduced most of the irregular alignments, which can be confirmed from the examples of recognition errors and alignment plots shown in Appendix C. The proposed end-to-end ASR does not require linguistic resources, such as morphological analyzer, pronunciation dictionary, and language model, which are essential components of conventional Japanese and Mandarin Chinese ASR systems. 
Nevertheless, the method achieved comparable/superior performance to the state-of-theart conventional systems for the CSJ and MTS tasks. In addition, the proposed method does not require GMM/HMM construction for initial alignments, DNN pre-training, lattice generation for sequence discriminative training, complex search in decoding (e.g., FST decoder or lexical tree search based decoder). Thus, the method greatly simplifies the ASR building process, reducing code size and complexity. Future work will apply this technique to the other languages including English, where we have to solve an issue of long sequence lengths, which requires heavy computation cost and makes it difficult to train a decoder network. Actually, neural machine translation handles this issue by using a sub word unit (concatenating several letters to form a new sub word unit) (Wu et al., 2016), which would be a promising direction for end-toend ASR. 526 References Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. 2015. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595 . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. Endto-end attention-based large vocabulary speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 4945–4949. Steven Bird. 2006. NLTK: the natural language toolkit. In Joint conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL) on Interactive presentation sessions. pages 69–72. Herv´e Bourlard and Nelson Morgan. 1994. Connectionist speech recognition: A hybrid approach. Kluwer Academic Publishers. William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint arXiv:1508.01211 . Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent NN: First results. arXiv preprint arXiv:1412.1602 . Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. arXiv preprint arXiv:1612.02695 . Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems (NIPS). pages 577–585. Alex Graves. 2008. Supervised sequence labelling with recurrent neural networks. PhD thesis, Technische Universit¨at M¨unchen . Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In International Conference on Machine learning (ICML). pages 369–376. Alex Graves and Navdeep Jaitly. 2014. Towards endto-end speech recognition with recurrent neural networks. In International Conference on Machine Learning (ICML). pages 1764–1772. Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. 
IEEE Signal Processing Magazine 29(6):82–97. Frederick Jelinek. 1976. Continuous speech recognition by statistical methods. Proceedings of the IEEE 64(4):532–556. Naoyuki Kanda, Xugang Lu, and Hisashi Kawai. 2016. Maximum a posteriori based decoding for CTC acoustic models. In Interspeech 2016. pages 1868– 1872. Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint CTC-attention based end-to-end speech recognition using multi-task learning. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 4835–4839. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to japanese morphological analysis. In Conference on Empirical Methods on Natural Language Processing (EMNLP). volume 4, pages 230–237. Yi Liu, Pascale Fung, Yongsheng Yang, Christopher Cieri, Shudong Huang, and David Graff. 2006. HKUST/MTS: A very large scale mandarin telephone speech corpus. In Chinese Spoken Language Processing, Springer, pages 724–735. Liang Lu, Xingxing Zhang, and Steve Renals. 2016. On training the recurrent neural network encoderdecoder for large vocabulary end-to-end speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 5060–5064. Kikuo Maekawa, Hanae Koiso, Sadaoki Furui, and Hitoshi Isahara. 2000. Spontaneous speech corpus of japanese. In International Conference on Language Resources and Evaluation (LREC). volume 2, pages 947–952. Yajie Miao, Mohammad Gowayyed, and Florian Metze. 2015. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). pages 167–174. Yajie Miao, Mohammad Gowayyed, Xingyu Na, Tom Ko, Florian Metze, and Alexander Waibel. 2016. An empirical exploration of ctc acoustic models. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pages 2623–2627. Takafumi Moriya, Takahiro Shinozaki, and Shinji Watanabe. 2015. Kaldi recipe for Japanese spontaneous speech recognition and its evaluation. In Autumn Meeting of ASJ. 3-Q-7. 527 Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063 . Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahrmani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for asr based on lattice-free MMI. In Interspeech. pages 2751–2755. Hagen Soltau, Hank Liao, and Hasim Sak. 2016. Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition. arXiv preprint arXiv:1610.09975 . Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in NIPS. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Nianwen Xue et al. 2003. 
Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing 8(1):29–48. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . A Location-based attention mechanism This section provides the equations of a locationbased attention mechanism Attention(·) in Eq. (5). alt = Attention({al−1}t, ql−1, ht), where {al−1}t = [al−1,1, · · · , al−1,T ]⊤. To obtain alt, we use the following equations: {ft}t = K ∗al−1 (19) elt = g⊤tanh(Gqql−1 + Ghht + Gfft + b) (20) alt = exp(etl) P t exp(etl) (21) K, Gq, Gh, Gf are matrix parameters. b and g are vector parameters. ∗denotes convolution along input feature axis t with matrix K to produce feature {ft}t. Algorithm 2 CTC hypothesis score 1: function αCTC(h, X) 2: g, c ←h ▷split h into the last label c and the rest g 3: if c = <eos> then 4: return log{γ(n) T (g) + γ(b) T (g)} 5: else 6: γ(n) 1 (h) ←  p(z1 = c|X) if g = <sos> 0 otherwise 7: γ(b) 1 (h) ←0 8: Ψ ←γ(n) 1 (h) 9: for t = 2 . . . T do 10: Φ ←γ(b) t−1(g) +  0 if last(g)=c γ(n) t−1(g) otherwise 11: γ(n) t (h) ←  γ(n) t−1(h) + Φ  p(zt = c|X) 12: γ(b) t (h) ←  γ(b) t−1(h) + γ(n) t−1(h)  p(zt = <b>|X) 13: Ψ ←Ψ + Φ · p(zt = c|X) 14: end for 15: return log(Ψ) 16: end if 17: end function B CTC-based hypothesis score The CTC score αctc(h, X) in Eq. (17) is computed as shown in Algorithm 2. Let γ(n) t (h) and γ(b) t (h) be the forward probabilities of the hypothesis h over the time frames 1 . . . t, where the superscripts (n) and (b) denote different cases in which all CTC paths end with a nonblank or blank symbol, respectively. Before starting the beam search, γ(n) t () and γ(b) t () are initialized for t = 1, . . . , T as γ(n) t (<sos>) = 0, (22) γ(b) t (<sos>)= tY τ=1 γ(b) τ−1(<sos>)p(zτ =<b>|X), (23) where we assume that γ(b) 0 (<sos>) = 1 and <b> is a blank symbol. Note that the time index t and input length T may differ from those of the input utterance X owing to the subsampling technique for the encoder (Povey et al., 2016; Chan et al., 2015). In Algorithm 2, the hypothesis h is first split into the last label c and the rest g in line 2. If c is <eos>, it returns the logarithm of the forward probability assuming that h is a complete hypothesis in line 4. The forward probability of h is given by pctc(h|X) = γ(n) T (g) + γ(b) T (g) (24) according to the definition of γ(n) t () and γ(b) t (). If c is not <eos>, it computes the forward proba528 bilities γ(n) t (h) and γ(b) t (h), and the prefix probability Ψ = pctc(h, . . . |X) assuming that h is not a complete hypothesis. The initialization and recursion steps for those probabilities are described in lines 6–14. In this function, we assume that whenever we compute the probabilities γ(n) t (h), γ(b) t (h) and Ψ, the forward probabilities γ(n) t (g) and γ(b) t (g) have already been obtained through the beam search process because g is a prefix of h such that |g| < |h|. C Examples of irregular alignments We list examples of irregular alignments caused by attention-based ASR. Figure 3 shows an example of repetitions of word chunks. The first chunk of blue characters in attention-based ASR (MTL) is appeared again, and the whole second chunk part becomes insertion errors. Figure 4 shows an example of deletion errors. The latter half of the sentence in attention-based ASR (MTL) is broken, which causes deletion errors. The hybrid CTC/attention with both multitask learning and joint decoding avoids these issues. Figures 5 and 6 show alignment plots corresponding to Figs. 
C Examples of irregular alignments

We list examples of irregular alignments caused by attention-based ASR. Figure 3 shows an example of repeated word chunks: the first chunk of characters in the attention-based ASR (MTL) hypothesis appears again, and the whole second chunk becomes insertion errors. Figure 4 shows an example of deletion errors: the latter half of the sentence in the attention-based ASR (MTL) hypothesis is broken, which causes deletion errors. The hybrid CTC/attention model with both multi-task learning and joint decoding avoids these issues. Figures 5 and 6 show alignment plots corresponding to Figures 3 and 4, respectively, where the X-axis shows time frames and the Y-axis shows the character sequence hypothesis. These visual plots also demonstrate that the proposed joint decoding approach can suppress irregular alignments.

Figure 3: Example of insertion errors in attention-based ASR with MTL and joint decoding (utterance id 20040717_152947_A010409_B010408-A-057045-057837). The MTL hypothesis repeats a chunk of the reference transcript (28 correct, 2 substitutions, 3 deletions, 45 insertions), whereas joint decoding nearly recovers the reference (31 correct, 1 substitution, 1 deletion, 0 insertions).

Figure 4: Example of deletion errors in attention-based ASR with MTL and joint decoding (utterance id A01F0001_0844951_0854386). The MTL hypothesis truncates the latter half of the utterance (30 correct, 0 substitutions, 47 deletions, 0 insertions), whereas joint decoding produces a nearly complete hypothesis (67 correct, 9 substitutions, 1 deletion, 0 insertions).

Figure 5: Example of alignments including insertion errors in attention-based ASR with (a) MTL and (b) joint decoding (utterance id: 20040717_152947_A010409_B010408-A-057045-057837).

Figure 6: Example of alignments including deletion errors in attention-based ASR with (a) MTL and (b) joint decoding (utterance id: A01F0001_0844951_0854386).
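Complementing Appendix A, the snippet below sketches the location-based attention score of Eqs. (19)–(21) for a single utterance in NumPy. The tensor shapes, the use of np.convolve for the location filters, and all parameter names are assumptions made for illustration; they do not describe the paper's actual implementation.

import numpy as np

def location_attention(a_prev, q_prev, H, K, G_q, G_h, G_f, b, g):
    """Location-based attention weights (Eqs. 19-21) for one utterance.

    a_prev : (T,)    previous attention weights a_{l-1}
    q_prev : (dq,)   previous decoder state q_{l-1}
    H      : (T, dh) encoder states h_1 .. h_T
    K      : (F, w)  1-D convolution filters over the frame axis
    G_q    : (da, dq), G_h : (da, dh), G_f : (da, F), b, g : (da,)
    Returns the new attention weights a_l of shape (T,).
    """
    F = K.shape[0]

    # Eq. (19): f_t = K * a_{l-1}  (convolution along the frame axis t)
    feats = np.stack([np.convolve(a_prev, K[i], mode="same") for i in range(F)],
                     axis=1)                                   # (T, F)

    # Eq. (20): e_{lt} = g^T tanh(G_q q_{l-1} + G_h h_t + G_f f_t + b)
    scores = np.tanh(q_prev @ G_q.T + H @ G_h.T + feats @ G_f.T + b) @ g  # (T,)

    # Eq. (21): softmax over frames t
    scores -= scores.max()          # for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()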
2017
48
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 530–540 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1049 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 530–540 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1049 Found in Translation: Reconstructing Phylogenetic Language Trees from Translations Ella Rabinovich△⋆ Noam Ordan† Shuly Wintner⋆ △IBM Research Haifa, Israel ⋆Department of Computer Science, University of Haifa, Israel †The Arab College for Education, Haifa, Israel {ellarabi,noam.ordan}@gmail.com, [email protected] Abstract Translation has played an important role in trade, law, commerce, politics, and literature for thousands of years. Translators have always tried to be invisible; ideal translations should look as if they were written originally in the target language. We show that traces of the source language remain in the translation product to the extent that it is possible to uncover the history of the source language by looking only at the translation. Specifically, we automatically reconstruct phylogenetic language trees from monolingual texts (translated from several source languages). The signal of the source language is so powerful that it is retained even after two phases of translation. This strongly indicates that source language interference is the most dominant characteristic of translated texts, overshadowing the more subtle signals of universal properties of translation. 1 Introduction Translation has played a major role in human civilization since the rise of law, religion, and trade in multilingual societies. Evidence of scribe translations goes as far back as four millennia ago, to the time of Hammurabi; this practice is also mentioned in the Bible (Esther 1:22; 8:9). For thousands of years, translators have tried to remain invisible, setting a standard according to which the act of translation should be seamless, and its product should look as if it were written originally in the target language. Cicero (106-43 BC) commented on his translation ethics, “I did not hold it necessary to render word for word, but I preserved the general style and force of the language.” These words were echoed 500 years later by St. Jerome (347-420 CE), also known as the patron saint of translators, who wrote, “I render, not word for word, but sense for sense.” Translator tendency for invisibility has peaked in the past 150 years in the English speaking world (Venuti, 2008), in spite of some calls for “foreignization” in translations, e.g., the German Romanticists, especially the translations from Greek by Friedrich H¨olderlin (Steiner, 1975) and Nabokov’s translation of Eugene Onegin. These, however, as both Steiner (1975) and Venuti (2008) argue, are the exception to the rule. In fact, in recent years, the quality of translations has been standardized (ISO 17100). Importantly, the translations we studied in our work conform to this standard. Despite the continuous efforts of translators, translations are known to feature unique characteristics that set them apart from non-translated texts, referred to as originals here (Toury, 1980, 1995; Frawley, 1984; Baker, 1993). 
This is not the result of poor translation, but rather a statistical phenomenon: various features distribute differently in originals than in translations (Gellerstam, 1986). Several factors may account for the differences between originals and translations; many are classified as universal features of translation. Cognitively speaking, all translations, regardless of the source and target language, are susceptible to the same constraints. Therefore, translation products are expected to share similar artifacts. Such universals include simplification: the tendency to make complex source structures simpler in the target (Blum-Kulka and Levenston, 1983; Vanderauwerea, 1985); standardization: the tendency to over-conform to target language standards (Toury, 1995); and explicitation: the tendency to render implicit source structures more explicit in the target language (Blum-Kulka, 1986; Øver˚as, 1998). In contrast to translation universals, interference reflects the “fingerprints” of the source lan530 guage on the translation product. Toury (1995) defines interference as “phenomena pertaining to the make-up of the source text tend to be transferred to the target text”. Interference, by definition, is a language-pair specific phenomenon; isomorphic structures shared by the source and target languages can easily replace one another, thereby manifesting the underlying process of cross-linguistic influence of the source language on the translation outcome. Pym (2008) points out that interference is a set of both segmentational and macrostructural features. Our main hypothesis is that, due to interference, languages with shared isomorphic structures are likely to share more features in the target language of a translation. Consequently, the distance between two languages, when assessed using such features, can be retained to some extent in translations from these two languages to a third one. Furthermore, we hypothesize that by extracting structures from translated texts, we can generate a phylogenetic tree that reflects the “true” distances among the source languages. Finally, we conjecture that the quality of such trees will improve when constructed using features that better correspond to interference phenomena, and will deteriorate using more universal features of translation. The main contribution of this paper is thus the demonstration that interference phenomena in translation are powerful to an extent that facilitates clustering source languages into families and (partially) reconstructing intra-families ties; so much so, that these results hold even after two rounds of translation. Moreover, we perform analysis of various linguistic phenomena in the source languages, laying out quantitative grounds for the language typology reconstruction results. 2 Related work A number of works in historical linguistics have applied methods from the field of bioinformatics, in particular algorithms for generating phylogenetic trees (Ringe et al., 2002; Nakhleh et al., 2005a,b; Ellison and Kirby, 2006; Boc et al., 2010). Most of them rely on lists of cognates, words in multiple languages with a common origin that share a similar meaning and a similar pronunciation (Dyen et al., 1992; Rexov´a et al., 2003). These works all rely on multilingual data, whereas we construct phylogenetic trees from texts in a single language. The claim that translations exhibit unique properties is well established in translation studies literature (Toury, 1980; Frawley, 1984; Baker, 1993; Toury, 1995). 
Based on this assumption, several works use text classification techniques employing supervised, and recently also unsupervised, machine learning approaches, to distinguish between originals and translations (Baroni and Bernardini, 2006; Ilisei et al., 2010; Koppel and Ordan, 2011; Volansky et al., 2015; Rabinovich and Wintner, 2015; Avner et al., 2016). The features used in these studies reflect both universal and interference-related traits. Along the way, interference was proven to be a robust phenomenon, operating in every single sentence, even on the morpheme level (Avner et al., 2016). Interference can also be studied on pairs of source- and target languages and focus, for example, on word order (Eetemadi and Toutanova, 2014). The powerful signal of interference is evident, e.g., by the finding that a classifier trained to distinguish between originals and translations from one language, exhibits lower accuracy when tested on translations from another language, and this accuracy deteriorates proportionally to the distance between the source and target languages (Koppel and Ordan, 2011). Consequently, it is possible to accurately distinguish among translations from various source languages (van Halteren, 2008). A related task, identifying the native tongue of English language students based only on their writing in English, has been the subject of recent interest (Tetreault et al., 2013). The relations between this task and identification of the source language of translation has been emphazied, e.g., by Tsvetkov et al. (2013). English texts produced by native speakers of a variety of languages have been used to reconstruct phylogenetic trees, with varying degrees of success (Nagata and Whittaker, 2013; Berzak et al., 2014). In contrast to language learners, however, translators translate into their mother tongue, so the texts we studied were written by highly competent native speakers. Our work is the first to construct phylogenetic trees from translations. 3 Methodology 3.1 Dataset This corpus-based study uses Europarl (Koehn, 2005), the proceedings of the European Parliament and their translations into all the official Eu531 ropean Union (EU) languages. Europarl is one of the most popular parallel resources in natural language processing, and has been used extensively in machine translation. We use a version of Europarl spanning the years 1999 through 2011, in which the direction of translation has been established through a comprehensive cross-lingual validation of the speakers’ original language (Rabinovich et al., 2015). All parliament speeches were translated1 from the original language into all other EU languages (21 at the time) using English as an intermediate, pivot language. We thus refer to translations into English as direct, while translations into all other languages, via English as a third language, are indirect. We hypothesize that indirect translation will obscure the markers of the original language in the final translation. Nevertheless, we expect (weakened) fingerprints of the source language to be identifiable in the target despite the pivot, presumably resulting in somewhat poorer phylogenetic trees. 
We focus on 17 source languages, grouped into 3 language families: Germanic, Romance, and Balto-Slavic.2 These include translations to English and to French from Bulgarian (BG), Czech (CS), Danish (DA), Dutch (NL), English (EN), French (FR), German (DE), Italian (IT), Latvian (LV), Lithuanian (LT), Polish (PL), Portuguese (PT), Romanian (RO), Slovak (SK), Slovenian (SL), Spanish (ES), and Swedish (SV). We also included texts written originally in English and French. All datasets were split on sentence boundary, cleaned (empty lines removed), tokenized, and annotated for part-of-speech (POS) using the Stanford tools (Manning et al., 2014). In all the tree reconstruction experiments, we sampled equal-sized chunks from each source language, using as much data as available for all languages. This yielded 27, 000 tokens from translations to English, and 30, 000 tokens from translations into French. 1The common practice is that one translates into one’s native language; in particular, this practice is strictly imposed in the EU parliament where a translator must have perfect proficiency in the target language, meeting very high standards of accuracy. 2We excluded source languages with insufficient amounts of data, along with Greek, which is the only representative of the Hellenic family. 3.2 Features Following standard practice (Volansky et al., 2015; Rabinovich and Wintner, 2015), we represented both original and translated texts as feature vectors, where the choice of features determines the extent to which we expect sourcelanguage interference to be present in the translation product. Crucially, the features abstract away from the contents of the texts and focus on their structure, reflecting, among other things, morphological and syntactic patterns. We use the following feature sets: 1. The top-1,000 most frequent POS trigrams, reflecting shallow syntactic structure. 2. Function words (FW), words known to reflect grammar of texts in numerous classification tasks, as they include non-content words such as articles, prepositions, etc. (Koppel and Ordan, 2011).3 3. Cohesive markers (Hinkel, 2001); these words and phrases are assumed to be overrepresented in translated texts, where, for example, an implicit contrast in the original is made explicit in the target text with words such as ‘but’ or ‘however’.4 Note that the first two feature sets are strongly associated with interference, whereas the third is assumed to be universal and an instance of explicitation. We therefore expect trees based on the first two feature sets to be much better than those based on the third. 3.3 The Indo-European phylogenetic tree The last few decades produced a large body of research on the evolution of individual languages and language families. While the existence of the Indo-European (IE) family of languages is an established fact, its history and origins are still a matter of much controversy (Pereltsvaig and Lewis, 2015). Furthermore, the actual subgroupings of languages within this family are not clear-cut (Ringe et al., 2002). Consequently, algorithms that attempt to reconstruct the IE languages tree face a serious evaluation challenge (Ringe et al., 2002; Rexov´a et al., 2003; Nakhleh et al., 2005a). To evaluate the quality of the reconstructed trees, we define a metric to accurately assess their distance from the “true” tree. The tree that we use as ground truth (Serva and Petroni, 2008) has 3For French we used the list of FW available at https: //code.google.com/archive/p/stop-words/. 
4For French we used http://utilisateurs. linguist.univ-paris-diderot.fr/˜croze/D/ Lexconn.xml. 532 several advantages. First, it is similar to a wellaccepted tree (Gray and Atkinson, 2003) (which is not insusceptible to criticism (Pereltsvaig and Lewis, 2015)). The differences between the two are mostly irrelevant for the group of languages that we address in this research. Second, it is a binary tree, facilitating comparison with the trees we produce, which are also binary branching. Third, its branches are decorated with the approximate year in which splitting occurred. This provides a way to induce the distance between two languages, modeled as lengths of paths in the tree, based on chronological information. We projected the gold tree (Serva and Petroni, 2008) onto the set of 17 languages we considered in this work, preserving branch lengths. Figure 1 depicts the resulting gold-standard subtree. Figure 1: Gold standard tree, pruned We reconstructed phylogenetic language trees by performing agglomerative (hierarchical) clustering of feature vectors extracted separately from English and French translations. We performed clustering using the variance minimization algorithm (Ward Jr, 1963) with Euclidean distance (the implementation available in the Python SciPy library). All feature values were normalized to a zero-one scale prior to clustering. 3.4 Evaluation methodology To evaluate the quality of the trees we generate, we compute their similarity to the gold standard via two metrics: unweighted, assessing only structural (topological) similarity, and weighted, estimating similarity based on both structure and branching length. Several methods have been proposed for evaluating the quality of phylogenetic language trees (Pompei et al., 2011; Wichmann and Grant, 2012; Nouri and Yangarber, 2016). A popular metric is the Robinson-Foulds (RF) methodology (Robinson and Foulds, 1981), which is based on the symmetric difference in the number of bi-partitions, the ways in which an edge can split the leaves of a tree into two sets. The distance between two trees is then defined as the number of splits induced by one of the trees, but not the other. Despite its popularity, the RF metric has well-known shortcomings; for example, relocating a single leaf can result in a tree maximally distant from the original one (B¨ocker et al., 2013). Additional methodologies for evaluating phylogenetic trees include branch score distance (Kuhner and Felsenstein, 1994), enhancing RF with branch lengths, purity score (Heller and Ghahramani, 2005), and subtree score (Teh et al., 2009). The latter two ignore branch lengths and only consider structural similarities for evaluation. We opted for a simple yet powerful adaptation of the L2-norm to leaf-pair distance, inherently suitable for both unweighted and weighted evaluation. Given a tree of N leaves, li, i ∈[1..N], the weighted distance between two leaves li, lj in a tree τ, denoted Dτ(li, lj), is the sum of the weights of all edges on the shortest path between li and lj. The unweighted distance sums up the number of the edges in this path (i.e., all weights are equal to 1). 
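The clustering step described in Section 3.3 (Ward-linkage agglomerative clustering of zero-one-normalized feature vectors with Euclidean distances, via SciPy) can be sketched in a few lines; the variable names and the aggregation of one feature vector per source language are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.cluster.hierarchy import linkage

def build_language_tree(feature_vectors):
    """Agglomerative clustering of per-language feature vectors.

    feature_vectors : (n_languages, n_features) array, e.g., POS-trigram
                      frequencies aggregated over the chunks of each source
                      language.  Returns the SciPy linkage matrix, which
                      defines a binary-branching tree over the languages.
    """
    X = np.asarray(feature_vectors, dtype=float)

    # Normalize every feature to a zero-one scale prior to clustering.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    X = (X - X.min(axis=0)) / span

    # Ward's variance-minimization criterion with Euclidean distances.
    return linkage(X, method="ward", metric="euclidean")

The resulting linkage matrix can be rendered with scipy.cluster.hierarchy.dendrogram to obtain trees of the kind shown in Figure 3.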
The distance Dist(τ, g) between a generated tree τ and the gold tree g is then calculated by summing the square differences between all leaf-pair distances (whether weighted or unweighted) in the two trees:

  Dist(τ, g) = Σ_{i,j ∈ [1..N]; i≠j} (D_τ(l_i, l_j) − D_g(l_i, l_j))²

4 Detection of Translations and their Source Language

4.1 Identification of translation

We first reconfirmed that originals and translations are easily separable, extending results of supervised classification of O vs. T (where O refers to original English texts, and T to translated English) (Baroni and Bernardini, 2006; van Halteren, 2008; Volansky et al., 2015) to the 16 original languages considered in this work. We also conducted similar experiments with French originals and translations. We used 200 chunks of approximately 2K tokens (respecting sentence boundaries) from both O and T, and normalized the values of lexical features by the number of tokens in each chunk. For classification, we used Platt's sequential minimal optimization algorithm (Keerthi et al., 2001; Hall et al., 2009) to train support vector machine classifiers with the default linear kernel. We evaluated the results with 10-fold cross-validation. Table 1 presents the classification accuracy of (English and French) O vs. T using each feature set. In line with previous works (Ilisei et al., 2010; Volansky et al., 2015; Rabinovich and Wintner, 2015), the binary classification results are highly accurate, achieving over 95% accuracy using POS-trigrams and function words for both English and French, and above 85% using cohesive markers.

Feature            English   French
POS-trigrams       97.60     98.40
Function words     96.45     95.15
Cohesive markers   86.50     85.25
Table 1: Classification accuracy (%) of English and French O vs. T.

4.2 Identification of source language

Identifying the source language of translated texts is a task in which machines clearly outperform humans (Baroni and Bernardini, 2006). Koppel and Ordan (2011) performed 5-way classification of texts translated from Italian, French, Spanish, German, and Finnish, achieving an accuracy of 92.7%. Furthermore, misclassified instances were more frequently assigned to genetically related languages. We extended this experiment to 14 languages representing 3 language families (the number of languages was limited by the amount of data available). We extracted 100 chunks of 1,000 tokens each from each source language and classified the translated English (and, separately, French) texts into 14 classes using the best performing POS-trigrams feature set. Cross-validation evaluation yielded an accuracy of 75.61% on English translations (note that the baseline is 100/14 = 7.14%). The corresponding confusion matrix, presented in Figure 2 (left), reveals interesting phenomena: much of the confusion resides within language families, framed by the bold line in the figure. For example, instances of Germanic languages are almost perfectly classified as Germanic, with only a few chunks assigned to other language families. The evident intra-family linguistic ties exposed by this experiment support the intuition that cross-linguistic transfer in translation is governed by typological properties of the source language. That is, translations from related sources tend to resemble each other to a greater extent than translations from more distant languages.
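A minimal sketch of the chunk-classification protocol of Sections 4.1 and 4.2 is given below. The paper trained SVMs with Weka's SMO implementation; the scikit-learn LinearSVC used here, together with the toy feature extractor, is only an illustrative stand-in under the same linear-kernel, 10-fold cross-validation setup.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def chunk_features(tokens, vocabulary):
    """Toy lexical features: relative frequency of each vocabulary item
    (e.g., function words) in a chunk, normalized by chunk length."""
    index = {w: i for i, w in enumerate(vocabulary)}
    counts = np.zeros(len(vocabulary))
    for tok in tokens:
        i = index.get(tok.lower())
        if i is not None:
            counts[i] += 1
    return counts / max(len(tokens), 1)

def cross_validated_accuracy(chunks, labels, vocabulary, folds=10):
    """10-fold cross-validated linear-SVM accuracy, usable both for the
    binary O-vs-T task and for the 14-way source-language task."""
    X = np.array([chunk_features(c, vocabulary) for c in chunks])
    y = np.array(labels)
    clf = LinearSVC()  # linear kernel, mirroring the paper's SMO setup
    return cross_val_score(clf, X, y, cv=folds).mean()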
This observation is further supported by the evaluation of a three-way classification task, where the goal is to only identify the language family (Germanic, Romance, or Balto-Slavic): the accuracy of this task is 90.62%. Note also that the mis-classified instances of both Romance and Germanic languages are nearly never attributed to Balto-Slavic languages, since Germanic and Romance are much closer to each other than to Balto-Slavic. Figure 2 (right) displays a similar confusion matrix, the only difference being that French translations are classified. We attribute the lower cross-validation accuracy (48.92%, reflected also by the lower number of correctly assigned instances on the matrix diagonal, compared to English) to the intervention of the pivot language in the translation process. Nevertheless, the confusion is still mainly constrained to intra-family boundaries.

Figure 2: Confusion matrix of 14-way classification of English (left) and French (right) translations. The actual class is represented by rows and the predicted one by columns.

5 Reconstruction of Phylogenetic Language Trees

5.1 Reconstructing language typology

Inspired by the results reported in Section 4.2, we generated phylogenetic language trees from both English and French texts translated from the other European languages. We hypothesized that interference from the source language was present in the translation product to an extent that would facilitate the construction of a tree sufficiently similar to the gold IE tree (Figure 1). The best trees, those closest to the gold standard, were generated using POS-trigrams: these are the features that are most closely associated with source-language interference (see Section 3.2). Figure 3 depicts the trees produced from English and French translations using POS-trigrams. Both trees reasonably group individual languages into three language-family branches. In particular, they cluster the Germanic and Romance languages closer than the Balto-Slavic. Capturing the more subtle intra-family ties turned out to be more challenging, although English outperformed its French counterpart on this task by almost perfectly reconstructing the Germanic sub-tree.

Figure 3: Phylogenetic language trees generated with English (left) and French (right) translations.

We repeated the clustering experiments with various feature sets. For each feature set, we randomly sampled equally-sized subsets of the dataset (translated from each of the source languages), represented the data as feature vectors, generated a tree by clustering the feature vectors, and then computed the weighted and unweighted distances between the generated tree and the gold standard. We repeated this procedure 50 times for each feature set, and then averaged the resulting distances. We report this average and the standard deviation.5

5 All the trees, both cladograms (with branches of equal length) and phylograms (with branch lengths proportional to the distance between two nodes), can be found at http://cl.haifa.ac.il/projects/translationese/acl2017_found-in-translation_trees.pdf

5.2 Evaluation results

The unweighted evaluation results are listed in Table 2. For comparison, we also present the distance obtained for a random tree, generated by sampling a random distance matrix from the uniform (0, 1) distribution. The reported random tree evaluation score is averaged over 1000 experiments. Similarly, we present weighted evaluation results in Table 3. All distances are normalized to a zero-one scale, where the bounds – zero and one – represent the identical and the most distant tree w.r.t. the gold standard, respectively.
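The distances reported in Tables 2 and 3 follow the leaf-pair metric Dist(τ, g) of Section 3.4. The sketch below computes it for trees encoded as nested (children, branch-length) pairs; this encoding and the helper names are assumptions for illustration and do not reflect the authors' code. Both trees are assumed to be defined over the same leaf set (the 17 source languages).

from itertools import combinations

def leaf_pair_distances(tree, weighted=True):
    """Distances between all leaf pairs of `tree`.

    A tree node is (children, branch_length) for internal nodes, where
    `children` is a list of subtrees, or (leaf_name, branch_length) for
    leaves.  Unweighted distances count edges (every branch counts as 1).
    Returns a dict mapping frozenset({leaf_a, leaf_b}) -> distance.
    """
    def walk(node):
        # Returns (to_node, pair_dists): the distance of every leaf below
        # `node` to `node`, and the distances of pairs whose lowest common
        # ancestor lies inside this subtree.
        content, _ = node
        if isinstance(content, str):                  # leaf
            return {content: 0.0}, {}
        to_node, pair_dists, lifted = {}, {}, []
        for child in content:
            child_to, child_pairs = walk(child)
            branch = child[1] if weighted else 1.0
            lifted.append({leaf: d + branch for leaf, d in child_to.items()})
            pair_dists.update(child_pairs)
        for d1, d2 in combinations(lifted, 2):        # LCA is `node`
            for a, da in d1.items():
                for b, db in d2.items():
                    pair_dists[frozenset((a, b))] = da + db
        for d in lifted:
            to_node.update(d)
        return to_node, pair_dists

    return walk(tree)[1]

def tree_distance(tau, gold, weighted=True):
    """Dist(tau, g): sum of squared leaf-pair distance differences."""
    d_tau = leaf_pair_distances(tau, weighted)
    d_gold = leaf_pair_distances(gold, weighted)
    return sum((d_tau[pair] - d_gold[pair]) ** 2 for pair in d_gold)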
Target language        English          French
Feature                AVG      STD     AVG      STD
POS-trigrams + FW      0.362    0.07    0.367    0.06
POS-trigrams           0.353    0.06    0.399    0.08
Function words         0.429    0.07    0.450    0.08
Cohesive markers       0.626    0.16    0.678    0.14
Random tree            0.724    0.07    0.724    0.07
Table 2: Unweighted evaluation of generated trees. AVG represents the average distance of a tree from the gold standard. The lowest distance in a column is boldfaced.

Target language        English          French
Feature                AVG      STD     AVG      STD
POS-trigrams + FW      0.278    0.03    0.348    0.02
POS-trigrams           0.301    0.03    0.351    0.03
Function words         0.304    0.03    0.376    0.05
Cohesive markers       0.598    0.12    0.636    0.07
Random tree            0.676    0.10    0.676    0.10
Table 3: Weighted evaluation of generated trees. AVG represents the average distance of a tree from the gold standard. The lowest distance in a column is boldfaced.

The results reveal several interesting observations. First, as expected, POS-trigrams induce trees closest to the gold standard among distinct feature sets. This corroborates our hypothesis that this feature set carries over interference of the source language to a considerable extent (see Section 1). Furthermore, function words achieve more moderate results, but still much better than random. This reflects the fact that these features carry over some grammatical constructs of the source language into the translation product. Finally, in all cases, the least accurate tree, nearly random, is produced by cohesive markers; this is evidence that this feature is source-language agnostic and reflects the universal effect of explicitation (see Section 3.2). While cohesive markers are a good indicator of translations, they reflect properties that are not indicative of the source language. The combination of POS-trigrams and FW yields the best tree in three out of four cases, implying that these feature sets capture different, complementary aspects of the source-language interference.

Surprisingly, reasonably good trees were also generated from French translations; yet, these trees are systematically worse than their English counterparts. The original signal of the source language is distorted twice: first via a Germanic language (English) and then via a Romance language (French). However, the signal is strong enough to yield a clear phylogenetic tree of the source languages. Interference is thus revealed to be an extremely powerful force, partially resistant to intermediate distortions.

6 Analysis

We demonstrated that source-language traces are dominant in translation products to an extent that facilitates reconstruction of the history of the source languages. We now inspect some of these phenomena in more detail to better understand the prominent characteristics of interference. For each phenomenon, we computed the frequencies of patterns that reflect it in texts translated to English from each individual language, and averaged the measures over each language family (Germanic, Romance, and Balto-Slavic). Figure 4 depicts the results.

6.1 Definite articles

Languages vary greatly in their use of articles. Like other Germanic languages, English has both definite ('the') and indefinite ('a') articles. However, many languages only have definite articles and some only have indefinite articles. Romance languages, and in particular the five Romance languages of our dataset, have definite articles that can sometimes be omitted, but not as commonly as in English.
Balto-Slavic languages typically do not have any articles. Mastering the use of articles in English is notoriously hard, leading to errors in non-native speakers (Han et al., 2006). For example, native speakers of Slavic languages tend to overuse definite articles in German (Hirschmann et al., 2013). Similarly, we expect translations from Balto-Slavic languages to overuse 'the'. We computed the frequencies of 'the' in translations to English from each of the three language families. The results show a significant overuse of 'the' in translations from Balto-Slavic languages, and some overuse in translations from Romance languages.

Figure 4: Frequencies reflecting various linguistic phenomena (Sections 6.1–6.4) in English translations, by source-language family (Germanic, Romance, Balto-Slavic): definite articles (per 10 tokens), 'of' constructions (per 25 tokens), verb-particle constructions (per 250 tokens), perfect forms (per 100 tokens), and progressive forms (per 500 tokens).

6.2 Possessive constructions

Languages also vary in the way they mark possession. English marks it in three ways: with the clitic ''s' ('the guest's room'), with a prepositional phrase containing 'of' ('the room of the guest'), and, like in other Germanic languages, with noun compounds ('guest room'). Compounds are considerably less frequent in Romance languages (Swan and Smith, 2001); Balto-Slavic indicates possession using case-marking. Languages also vary with respect to whether or not possession is head-marked. In Balto-Slavic languages, the genitive case is head-marked, which reverses the order of the two nouns with respect to the common English ''s' construction. Since copying word order, if possible across languages, is one of the major features of interference (Eetemadi and Toutanova, 2014), we anticipated that Balto-Slavic languages would exhibit the highest rate of noun-'of'-NP constructions. This would be followed by Romance languages, in which this construction is highly common, and then by Germanic languages, where noun compounds can often be copied as such. The results are consistent with our expectations.

6.3 Verb-particle constructions

Verb-particle constructions (e.g., 'turn down') consist of verbs that combine with a particle to create a new meaning (Dehé et al., 2002). Such constructions are much more common in Germanic languages (Iacobini and Masini, 2005), hence we expect to encounter their equivalents in English translations more frequently. We computed the frequencies of these constructions in the data; the results show a clear overuse of verb-particle constructions in translations from Germanic, and an underuse of such constructions in translations from Balto-Slavic.

6.4 Tense and aspect

Tense and aspect are expressed in different ways across languages. English, like other Germanic languages, uses a full system of aspectual distinctions, expressed via perfect and progressive forms (with the auxiliary verbs 'have' or 'be'). Balto-Slavic, in contrast, has no such system, and the distinction is marked lexically, by having two types of verbs. Romance languages are in between, with both lexical and grammatical distinctions. We computed the frequencies of perfect forms (defined as the auxiliary 'have' followed by the past participle form), and the progressive forms (defined as the auxiliary 'be' plus a present participle form). Indeed, Germanic overuses the perfect aspect significantly; the use of the progressive aspect also varies across language families, exhibiting the lowest frequency in translations from Balto-Slavic.
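The counts underlying Sections 6.1 and 6.4 can be approximated with simple pattern matching over POS-tagged tokens. The sketch below assumes Penn Treebank tags (as produced by the Stanford tagger mentioned in Section 3.1) and only inspects adjacent word pairs, which is a simplification of the paper's definitions; the normalization constants mirror Figure 4.

def interference_frequencies(tagged_tokens):
    """Approximate counts of patterns from Sections 6.1 and 6.4.

    tagged_tokens: list of (word, penn_tag) pairs for one text chunk.
    Perfect     = form of 'have' immediately followed by a past participle (VBN).
    Progressive = form of 'be'   immediately followed by a present participle (VBG).
    """
    have = {"have", "has", "had", "having"}
    be = {"be", "am", "is", "are", "was", "were", "been", "being"}
    perfect = progressive = 0
    for (w1, _), (_, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if w1.lower() in have and t2 == "VBN":
            perfect += 1
        elif w1.lower() in be and t2 == "VBG":
            progressive += 1
    n = max(len(tagged_tokens), 1)
    the_count = sum(1 for w, _ in tagged_tokens if w.lower() == "the")
    return {
        "definite articles per 10 tokens": 10 * the_count / n,
        "perfect forms per 100 tokens": 100 * perfect / n,
        "progressive forms per 500 tokens": 500 * progressive / n,
    }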
7 Conclusion Translations may be considered distortions of the original text, but this distortion is far from random. It depicts a very clear picture, reflecting language typology to the extent that disregarding the sources altogether, a phylogenetic tree can be reconstructed from a monolingual corpus consisting of multiple translations. This holds for the product of highly professional translators, who conform to a common standard, and whose products are edited by native speakers, like themselves. It even holds after two phases of translations. We are presently trying to extend these results to translations in a different domain (literary texts) into a very different language (Hebrew). Postulated universals in linguistics (Greenberg, 1963) were confronted with much contradicting evidence in recent years (Evans and Levinson, 2009), and the long quest for translation universals (Mauranen and Kujam¨aki, 2004) should now be viewed in light of our finding: more than anything else, translations are typified by interference. This does not undermine the force of translation universals: we demonstrated how explicitation, in the form of cohesive markers, can help identify translations. It may be possible to define classi537 fiers implementing other universal facets of translation, e.g., simplification, which will yield good separation between O and T. However, explicitation fails in the reproduction of language typology, whereas interference-based features produce trees of considerable quality. Remarkably, translations to contemporary English and French capture part of the millenniumold history of the source languages from which the translations were made. Our trees reflect some of the historical connections among the languages, but of course they are related in other ways, too (whether incidental, areal, etc.). This may explain the case of Romanian in our reconstructed trees: it has been isolated for many years from other Romance languages and was under heavy influence from Balto-Slavic languages. Very little research has been done in historical linguistics on how translations impact the evolvement of languages. The major trends relate to loan translations (Jahr, 1999), or the impact of canonical texts, such as Luther’s translation of the Bible to German (Russ, 1994) or the case of the King James translation to English (Crystal, 2010). It has been attested that for certain languages, up to 30% of published materials are mediated through translation (Pym and Chrupała, 2005). Given the fingerprints left on target language texts, translations very likely play a role in language change. We leave this as a direction for future research. Acknowledgements We wish to thank the three ACL anonymous reviewers for their constructive feedback. We are grateful to Sergiu Nisioi and Oren Weimann for their advice and helpful suggestions. We are also thankful to Yonatan Belinkov and Michael Katz for insightful and valuable comments. References Ehud Alexander Avner, Noam Ordan, and Shuly Wintner. 2016. Identifying translationese at the word and sub-word level. Digital Scholarship in the Humanities 31(1):30–54. http://dx.doi.org/10.1093/llc/fqu047. Mona Baker. 1993. Corpus linguistics and translation studies: Implications and applications. In Mona Baker, Gill Francis, and Elena Tognini-Bonelli, editors, Text and technology: in honour of John Sinclair, John Benjamins, Amsterdam, pages 233–252. Marco Baroni and Silvia Bernardini. 2006. 
A new approach to the study of Translationese: Machinelearning the difference between original and translated text. Literary and Linguistic Computing 21(3):259–274. Yevgeni Berzak, Roi Reichart, and Boris Katz. 2014. Reconstructing native language typology from foreign language usage. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. pages 21–29. http://aclweb.org/anthology/W/W14/W141603.pdf. Shoshana Blum-Kulka. 1986. Shifts of cohesion and coherence in translation. In Juliane House and Shoshana Blum-Kulka, editors, Interlingual and intercultural communication Discourse and cognition in translation and second language acquisition studies, Gunter Narr Verlag, volume 35, pages 17–35. Shoshana Blum-Kulka and Eddie A. Levenston. 1983. Universals of lexical simplification. In Claus Faerch and Gabriele Kasper, editors, Strategies in Interlanguage Communication, Longman, pages 119–139. Alix Boc, Anna Maria Di Sciullo, and Vladimir Makarenkov. 2010. Classification of the IndoEuropean languages using a phylogenetic network approach. In Hermann Locarek-Junge and Claus Weihs, editors, Classification as a Tool for Research: Proceedings of the 11th IFCS Biennial Conference and 33rd Annual Conference of the Gesellschaft f¨ur Klassifikation e.V., Dresden, March 13-18, 2009. Springer Berlin Heidelberg, Berlin, Heidelberg, pages 647–655. Sebastian B¨ocker, Stefan Canzar, and Gunnar W Klau. 2013. The generalized Robinson-Foulds metric. In International Workshop on Algorithms in Bioinformatics. Springer, pages 156–169. David Crystal. 2010. Begat: The King James Bible and the English Language. Oxford University Press. Nicole Deh´e, Ray Jackendoff, Andrew McIntyre, and Silke Urban, editors. 2002. Verb-particle Explorations. Interface explorations. Mouton de Gruyter. Isidore Dyen, Joseph B. Kruskal, and Paul Black. 1992. An Indoeuropean classification. a lexicostatistical experiment. Transactions of the American Philosophical Society 82(5):iii–132. Sauleh Eetemadi and Kristina Toutanova. 2014. Asymmetric features of human generated translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 159–164. http://www.aclweb.org/anthology/D14-1018. T. Mark Ellison and Simon Kirby. 2006. Measuring language divergence by intra-lexical comparison. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th 538 Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 273–280. https://doi.org/10.3115/1220175.1220210. Nicholas Evans and Stephen Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences 32(5):429–494. William Frawley. 1984. Prolegomenon to a theory of translation. In William Frawley, editor, Translation. Literary, Linguistic and Philosophical Perspectives, University of Delaware Press, Newark, pages 159– 175. Martin Gellerstam. 1986. Translationese in Swedish novels translated from English. In Lars Wollin and Hans Lindquist, editors, Translation Studies in Scandinavia, CWK Gleerup, Lund, pages 88–95. Russell D. Gray and Quentin D. Atkinson. 2003. Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature 426:435–439. Joseph H. Greenberg, editor. 1963. Universals of Human Language. MIT Press, Cambridge, Mass. 
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. SIGKDD Explorations 11(1):10–18. https://doi.org/10.1145/1656274.1656278. Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineering 12(02):115–129. Katherine A Heller and Zoubin Ghahramani. 2005. Bayesian hierarchical clustering. In Proceedings of the 22nd international conference on Machine learning. ACM, pages 297–304. Eli Hinkel. 2001. Matters of cohesion in L2 academic texts. Applied Language Learning 12(2):111–132. Hagen Hirschmann, Anke L¨udeling, Ines Rehbein, Marc Reznicek, and Amir Zeldes. 2013. Underuse of syntactic categories in Falko. a case study on modification. In Sylviane Granger, Ga¨etanelle Gilquin, and Fanny Meunier, editors, 20 Years of Learner Corpus Research. Looking Back, Moving Ahead., Presses Universitaires de Louvain, Louvain la Neuve, pages 223–234. Claudio Iacobini and Francesca Masini. 2005. Verbparticle constructions and prefixed verbs in Italian: typology, diachrony and semantics. In Mediterranean Morphology Meetings. volume 5, pages 157–184. Iustina Ilisei, Diana Inkpen, Gloria Corpas Pastor, and Ruslan Mitkov. 2010. Identification of translationese: A machine learning approach. In Alexander F. Gelbukh, editor, Proceedings of CICLing-2010: 11th International Conference on Computational Linguistics and Intelligent Text Processing. Springer, volume 6008 of Lecture Notes in Computer Science, pages 503–511. http://dx.doi.org/10.1007/978-3-642-12116-6. Ernst H˚akon Jahr. 1999. Language change: advances in historical sociolinguistics, volume 114. Walter de Gruyter. S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, and K.R.K. Murthy. 2001. Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Computation 13(3):637–649. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. MT Summit. Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 1318–1326. http://www.aclweb.org/anthology/P11-1132. Mary K Kuhner and Joseph Felsenstein. 1994. A simulation comparison of phylogeny algorithms under equal and unequal evolutionary rates. Molecular Biology and Evolution 11(3):459–468. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Anna Mauranen and Pekka Kujam¨aki, editors. 2004. Translation universals: Do they exist?. John Benjamins. Ryo Nagata and Edward W. D. Whittaker. 2013. Reconstructing an Indo-European family tree from non-native English texts. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. pages 1137–1147. http://aclweb.org/anthology/P/P13/P13-1112.pdf. Luay Nakhleh, Don Ringe, and Tandy Warnow. 2005a. Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages. Language 81(2):382–420. Luay Nakhleh, Tandy Warnow, Don Ringe, and Steven N. Evans. 
2005b. A comparison of phylogenetic reconstruction methods on an Indo-European dataset. Transactions of the Philological Society 103(2):171–192. https://doi.org/10.1111/j.1467968X.2005.00149.x. 539 Javad Nouri and Roman Yangarber. 2016. Modeling language evolution with codes that utilize context and phonetic features. CoNLL 2016 page 136. Lin Øver˚as. 1998. In search of the third code: An investigation of norms in literary translation. Meta 43(4):557–570. Asya Pereltsvaig and Martin W. Lewis. 2015. The Indo-European Controversy. Cambridge University Press, Cambridge. Simone Pompei, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. PloS one 6(6):e20109. Anthony Pym. 2008. On Toury’s laws of how translators translate. BENJAMINS TRANSLATION LIBRARY 75:311. Anthony Pym and Grzegorz Chrupała. 2005. The quantitative analysis of translation flows in the age of an international language. In Albert Branchadell and Lovell M. West, editors, Less Translated Languages, John Benjamins, Amsterdam, pages 27–38. Ella Rabinovich and Shuly Wintner. 2015. Unsupervised identification of translationese. Transactions of the Association for Computational Linguistics 3:419–432. Ella Rabinovich, Shuly Wintner, and Ofek Luis Lewinsohn. 2015. The Haifa corpus of translationese. Unpublished manuscript. http://arxiv.org/abs/1509.03611. Kateˇrina Rexov´a, Daniel Frynta, and Jan Zrzav`y. 2003. Cladistic analysis of languages: Indo-European classification based on lexicostatistical data. Cladistics 19(2):120–127. Kate˘rina Rexov´a, Daniel Frynta, and Jan Zrzav´y. 2003. Cladistic analysis of languages: IndoEuropean classification based on lexicostatistical data. Cladistics-the International Journal of the Willi Hennig Society 19(2):120–127. Don Ringe, Tandy Warnow, and Ann Taylor. 2002. Indo-European and computational cladistics. Transactions of the Philological Society 100(1):59–129. https://doi.org/10.1111/1467-968X.00091. David F Robinson and Leslie R Foulds. 1981. Comparison of phylogenetic trees. Mathematical biosciences 53(1):131–147. Charles VJ Russ. 1994. The German language today: A linguistic introduction. Psychology Press. Maurizio Serva and Filippo Petroni. 2008. IndoEuropean languages tree by Levenshtein distance. Europhysics Letters 81(6):68005. http://stacks.iop.org/0295-5075/81/i=6/a=68005. George Steiner. 1975. After Babel. University Press. Michael Swan and Bernard Smith. 2001. Learner English. Cambridge University Press, Cambridge, second edition. Yee Whye Teh, Hal Daum´e III, and Daniel Roy. 2009. Bayesian agglomerative clustering with coalescents. arXiv preprint arXiv:0907.0781 . Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A report on the first native language identification shared task. In Proceedings of the Eighth Workshop on Building Educational Applications Using NLP. Association for Computational Linguistics. Gideon Toury. 1980. In Search of a Theory of Translation. The Porter Institute for Poetics and Semiotics, Tel Aviv University, Tel Aviv. Gideon Toury. 1995. Descriptive Translation Studies and beyond. John Benjamins, Amsterdam / Philadelphia. Yulia Tsvetkov, Naama Twitto, Nathan Schneider, Noam Ordan, Manaal Faruqui, Victor Chahuneau, Shuly Wintner, and Chris Dyer. 2013. Identifying the L1 of non-native writers: the CMU-Haifa system. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, pages 279–287. http://www.aclweb.org/anthology/W13-1736. Hans van Halteren. 
2008. Source language markers in EUROPARL translations. In Donia Scott and Hans Uszkoreit, editors, COLING 2008, 22nd International Conference on Computational Linguistics, Proceedings of the Conference, 18-22 August 2008, Manchester, UK. pages 937–944. http://www.aclweb.org/anthology/C08-1118. Ria Vanderauwerea. 1985. Dutch novels translated into English: the transformation of a ‘minority’ literature. Rodopi, Amsterdam. Lawrence Venuti. 2008. The translator’s invisibility: A history of translation. Routledge. Vered Volansky, Noam Ordan, and Shuly Wintner. 2015. On the features of translationese. Digital Scholarship in the Humanities 30(1):98–118. Joe H Ward Jr. 1963. Hierarchical grouping to optimize an objective function. Journal of the American statistical association 58(301):236–244. Søren Wichmann and Anthony P Grant. 2012. Quantitative approaches to linguistic diversity: commemorating the centenary of the birth of Morris Swadesh, volume 46. John Benjamins Publishing. 540
2017
49
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 44–55 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1005 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 44–55 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1005 Learning Structured Natural Language Representations for Semantic Parsing Jianpeng Cheng† Siva Reddy† Vijay Saraswat‡ and Mirella Lapata† †School of Informatics, University of Edinburgh ‡IBM T.J. Watson Research {jianpeng.cheng,siva.reddy}@ed.ac.uk, [email protected], [email protected] Abstract We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.1 1 Introduction Semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations. Despite differences in the choice of meaning representation and model structure, most existing work conceptualizes semantic parsing following two main approaches. Under the first approach, an utterance is parsed and grounded to a meaning representation directly via learning a task-specific grammar (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2006; Kwiatkowksi et al., 2010; Liang et al., 2011; Berant et al., 2013; Flanigan et al., 2014; Pasupat and Liang, 2015; Groschwitz et al., 2015). Under the second approach, the utterance is first parsed to an intermediate task-independent representation tied to a syntactic parser and then mapped to a grounded 1Our code is available at https://github.com/ cheng6076/scanner. representation (Kwiatkowski et al., 2013; Reddy et al., 2016, 2014; Krishnamurthy and Mitchell, 2015; Gardner and Krishnamurthy, 2017). A merit of the two-stage approach is that it creates reusable intermediate interpretations, which potentially enables the handling of unseen words and knowledge transfer across domains (Bender et al., 2015). The successful application of encoder-decoder models (Bahdanau et al., 2015; Sutskever et al., 2014) to a variety of NLP tasks has provided strong impetus to treat semantic parsing as a sequence transduction problem where an utterance is mapped to a target meaning representation in string format (Dong and Lapata, 2016; Jia and Liang, 2016; Koˇcisk´y et al., 2016). Such models still fall under the first approach, however, in contrast to previous work (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011) they reduce the need for domain-specific assumptions, grammar learning, and more generally extensive feature engineering. But this modeling flexibility comes at a cost since it is no longer possible to interpret how meaning composition is performed. 
Such knowledge plays a critical role in understand modeling limitations so as to build better semantic parsers. Moreover, without any taskspecific prior knowledge, the learning problem is fairly unconstrained, both in terms of the possible derivations to consider and in terms of the target output which can be ill-formed (e.g., with extra or missing brackets). In this work, we propose a neural semantic parser that alleviates the aforementioned problems. Our model falls under the second class of approaches where utterances are first mapped to an intermediate representation containing natural language predicates. However, rather than using an external parser (Reddy et al., 2014, 2016) or manually specified CCG grammars (Kwiatkowski et al., 2013), we induce intermediate representations in the form of predicate-argument structures 44 from data. This is achieved with a transition-based approach which by design yields recursive semantic structures, avoiding the problem of generating ill-formed meaning representations. Compared to most existing semantic parsers which employ a CKY style bottom-up parsing strategy (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013; Berant et al., 2013; Berant and Liang, 2014), the transition-based approach we proposed does not require feature decomposition over structures and thereby enables the exploration of rich, non-local features. The output of the transition system is then grounded (e.g., to a knowledge base) with a neural mapping model under the assumption that grounded and ungrounded structures are isomorphic.2 As a result, we obtain a neural model that jointly learns to parse natural language semantics and induce a lexicon that helps grounding. The whole network is trained end-to-end on natural language utterances paired with annotated logical forms or their denotations. We conduct experiments on four datasets, including GEOQUERY (which has logical forms; Zelle and Mooney 1996), SPADES (Bisk et al., 2016), WEBQUESTIONS (Berant et al., 2013), and GRAPHQUESTIONS (Su et al., 2016) (which have denotations). Our semantic parser achieves the state of the art on SPADES and GRAPHQUESTIONS, while obtaining competitive results on GEOQUERY and WEBQUESTIONS. A side-product of our modeling framework is that the induced intermediate representations can contribute to rationalizing neural predictions (Lei et al., 2016). Specifically, they can shed light on the kinds of representations (especially predicates) useful for semantic parsing. Evaluation of the induced predicate-argument relations against syntax-based ones reveals that they are interpretable and meaningful compared to heuristic baselines, but they sometimes deviate from linguistic conventions. 2 Preliminaries Problem Formulation Let K denote a knowledge base or more generally a reasoning system, and x an utterance paired with a grounded meaning representation G or its denotation y. Our problem is to learn a semantic parser that maps x to G via an intermediate ungrounded representation U. When G is executed against K, it outputs denota2We discuss the merits and limitations of this assumption in Section 5 Predicate Usage Sub-categories answer denotation wrapper — type entity type checking stateid, cityid, riverid, etc. all querying for an entire set of entities — aggregation one-argument meta predicates for sets count, largest, smallest, etc. logical connectors two-argument meta predicates for sets intersect, union, exclude Table 1: List of domain-general predicates. tion y. 
Grounded Meaning Representation We represent grounded meaning representations in FunQL (Kate et al., 2005) amongst many other alternatives such as lambda calculus (Zettlemoyer and Collins, 2005), λ-DCS (Liang, 2013) or graph queries (Holzschuher and Peinl, 2013; Harris et al., 2013). FunQL is a variable-free query language, where each predicate is treated as a function symbol that modifies an argument list. For example, the FunQL representation for the utterance which states do not border texas is: answer(exclude(state(all), next to(texas))) where next to is a domain-specific binary predicate that takes one argument (i.e., the entity texas) and returns a set of entities (e.g., the states bordering Texas) as its denotation. all is a special predicate that returns a collection of entities. exclude is a predicate that returns the difference between two input sets. An advantage of FunQL is that the resulting s-expression encodes semantic compositionality and derivation of the logical forms. This property makes FunQL logical forms convenient to be predicted with recurrent neural networks (Vinyals et al., 2015; Choe and Charniak, 2016; Dyer et al., 2016). However, FunQL is less expressive than lambda calculus, partially due to the elimination of variables. A more compact logical formulation which our method also applies to is λ-DCS (Liang, 2013). In the absence of anaphora and composite binary predicates, conversion algorithms exist between FunQL and λ-DCS. However, we leave this to future work. Ungrounded Meaning Representation We also use FunQL to express ungrounded meaning representations. The latter consist primarily of natural language predicates and domain-general predicates. Assuming for simplicity that domaingeneral predicates share the same vocabulary 45 in ungrounded and grounded representations, the ungrounded representation for the example utterance is: answer(exclude(states(all), border(texas))) where states and border are natural language predicates. In this work we consider five types of domain-general predicates illustrated in Table 1. Notice that domain-general predicates are often implicit, or represent extra-sentential knowledge. For example, the predicate all in the above utterance represents all states in the domain which are not mentioned in the utterance but are critical for working out the utterance denotation. Finally, note that for certain domain-general predicates, it also makes sense to extract natural language rationales (e.g., not is indicative for exclude). But we do not find this helpful in experiments. In this work we constrain ungrounded representations to be structurally isomorphic to grounded ones. In order to derive the target logical forms, all we have to do is replacing predicates in the ungrounded representations with symbols in the knowledge base. 3 Modeling In this section, we discuss our neural model which maps utterances to target logical forms. The semantic parsing task is decomposed in two stages: we first explain how an utterance is converted to an intermediate representation (Section 3.1), and then describe how it is grounded to a knowledge base (Section 3.2). 3.1 Generating Ungrounded Representations At this stage, utterances are mapped to intermediate representations with a transition-based algorithm. In general, the transition system generates the representation by following a derivation tree (which contains a set of applied rules) and some canonical generation order (e.g., depth-first). 
For FunQL, a simple solution exists since the representation itself encodes the derivation. Consider again answer(exclude(states(all), border(texas))), which is tree structured. Each predicate (e.g., border) can be visualized as a non-terminal node of the tree and each entity (e.g., texas) as a terminal. The predicate all is a special case which acts as a terminal directly. We can generate the tree with a top-down, depth-first transition system reminiscent of recurrent neural network grammars (RNNGs; Dyer et al. 2016). Similar to RNNG, our algorithm uses a buffer to store input tokens in the utterance and a stack to store partially completed trees. A major difference in our semantic parsing scenario is that tokens in the buffer are not fetched in a sequential order or removed from the buffer. This is because the lexical alignment between an utterance and its semantic representation is hidden. Moreover, some predicates cannot be clearly anchored to a token span. Therefore, we allow the generation algorithm to pick tokens and combine logical forms in arbitrary orders, conditioning on the entire set of sentential features. Alternative solutions in the traditional semantic parsing literature include a floating chart parser (Pasupat and Liang, 2015), which can construct logical predicates out of thin air. Our transition system defines three actions, namely NT, TER, and RED, explained below.

NT(X) generates a Non-Terminal predicate. This predicate is either a natural language expression such as border, or one of the domain-general predicates exemplified in Table 1 (e.g., exclude). The type of predicate is determined by the placeholder X and once generated, it is pushed onto the stack and represented as a non-terminal followed by an open bracket (e.g., ‘border(’). The open bracket will be closed by a reduce operation.

TER(X) generates a TERminal entity or the special predicate all. Note that the terminal choice does not include variables (e.g., $0, $1), since FunQL is a variable-free language which sufficiently captures the semantics of the datasets we work with. The framework could be extended to generate directed acyclic graphs by incorporating variables, with additional transition actions for handling variable mentions and co-reference.

RED stands for REDuce and is used for subtree completion. It recursively pops elements from the stack until an open non-terminal node is encountered. The non-terminal is popped as well, after which a composite term representing the entire subtree, e.g., border(texas), is pushed back to the stack. If a RED action results in having no more open non-terminals left on the stack, the transition system terminates.

Table 2 shows the transition actions used to generate our running example. The model generates the ungrounded representation U conditioned on utterance x by recursively calling one of the above three actions.
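To make the three actions concrete, here is a minimal Python sketch (illustrative only, not the paper's code) that replays the action sequence of the running example and rebuilds the bracketed logical form with a stack; the derivation terminates once a RED leaves no open non-terminal on the stack.

```python
# Minimal sketch of the NT/TER/RED transition system (illustrative only).
# Each stack element is either a composed subtree (a closed string) or an
# open non-terminal marker of the form 'pred('.

def replay(actions):
    stack = []
    for act, arg in actions:
        if act == "NT":                       # push an open non-terminal
            stack.append(f"{arg}(")
        elif act == "TER":                    # push a terminal (entity or 'all')
            stack.append(arg)
        elif act == "RED":                    # pop until the open non-terminal
            children = []
            while not stack[-1].endswith("("):
                children.append(stack.pop())
            head = stack.pop()[:-1]           # drop the '('
            stack.append(f"{head}({', '.join(reversed(children))})")
            # terminate if no open non-terminal remains on the stack
            if not any(s.endswith("(") for s in stack):
                return stack[-1]
    return stack[-1]

actions = [("NT", "answer"), ("NT", "exclude"), ("NT", "states"),
           ("TER", "all"), ("RED", None), ("NT", "border"),
           ("TER", "texas"), ("RED", None), ("RED", None), ("RED", None)]

print(replay(actions))
# -> answer(exclude(states(all), border(texas)))
```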
Note that U is defined by a sequence of actions (denoted by a) and a sequence of term choices (denoted by u), as shown in Table 2.

Sentence: which states do not border texas
Non-terminal symbols in buffer: which, states, do, not, border
Terminal symbols in buffer: texas

Action   NT choice   TER choice   Stack (after the action)
NT       answer      —            answer (
NT       exclude     —            answer ( exclude (
NT       states      —            answer ( exclude ( states (
TER      —           all          answer ( exclude ( states ( all
RED      —           —            answer ( exclude ( states ( all )
NT       border      —            answer ( exclude ( states ( all ) , border (
TER      —           texas        answer ( exclude ( states ( all ) , border ( texas
RED      —           —            answer ( exclude ( states ( all ) , border ( texas )
RED      —           —            answer ( exclude ( states ( all ) , border ( texas ) )
RED      —           —            answer ( exclude ( states ( all ) , border ( texas ) ) )

Table 2: Actions taken by the transition system for generating the ungrounded meaning representation of the example utterance. (In the original table, domain-general predicates are highlighted in red.)

The conditional probability p(U|x) is factorized over time steps as:

p(U|x) = p(a, u|x) = \prod_{t=1}^{T} p(a_t | a_{<t}, x) \, p(u_t | a_{<t}, x)^{I(a_t \neq RED)}    (1)

where I is an indicator function. To predict the actions of the transition system, we encode the input buffer with a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) and the output stack with a stack-LSTM (Dyer et al., 2015). At each time step, the model uses the representation of the transition system e_t to predict an action:

p(a_t | a_{<t}, x) \propto \exp(W_a \cdot e_t)    (2)

where e_t is the concatenation of the buffer representation b_t and the stack representation s_t. While the stack representation s_t is easy to retrieve as the top state of the stack-LSTM, obtaining the buffer representation b_t is more involved. This is because we do not have an explicit buffer representation due to the non-projectivity of semantic parsing. We therefore compute at each time step an adaptively weighted representation of b_t (Bahdanau et al., 2015) conditioned on the stack representation s_t. This buffer representation is then concatenated with the stack representation to form the system representation e_t.

When the predicted action is either NT or TER, an ungrounded term u_t (either a predicate or an entity) needs to be chosen from the candidate list depending on the specific placeholder X. To select a domain-general term, we use the same representation of the transition system e_t to compute a probability distribution over candidate terms:

p(u_t^{GENERAL} | a_{<t}, x) \propto \exp(W_p \cdot e_t)    (3)

To choose a natural language term, we directly compute a probability distribution of all natural language terms (in the buffer) conditioned on the stack representation s_t and select the most relevant term (Jia and Liang, 2016):

p(u_t^{NL} | a_{<t}, x) \propto \exp(s_t)    (4)

When the predicted action is RED, the completed subtree is composed into a single representation on the stack. For the choice of composition function, we use a single-layer neural network as in Dyer et al. (2015), which takes as input the concatenated representation of the predicate and argument of the subtree.

3.2 Generating Grounded Representations

Since we constrain the network to learn ungrounded structures that are isomorphic to the target meaning representation, converting ungrounded representations to grounded ones becomes a simple lexical mapping problem. For simplicity, hereafter we do not differentiate natural language and domain-general predicates.
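As an illustration of Equations (2)–(4) above (not the actual trained model), the following numpy sketch shows the shape of the computation in Section 3.1: an attention-weighted buffer summary is concatenated with the stack-LSTM state to score transition actions and domain-general terms, while natural language terms in the buffer are scored against the stack state. All dimensions and weights are random placeholders.

```python
# Schematic of action / term prediction (Equations 2-4); illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_words, d = 6, 8                      # toy buffer length and hidden size
B = rng.normal(size=(n_words, d))      # biLSTM encodings of buffer tokens
s_t = rng.normal(size=d)               # stack-LSTM state at time t

# Adaptively weighted buffer representation conditioned on the stack state.
attn = softmax(B @ s_t)                # one weight per buffer token
b_t = attn @ B                         # weighted sum of token encodings

e_t = np.concatenate([b_t, s_t])       # transition system representation

W_a = rng.normal(size=(3, 2 * d))      # 3 actions: NT, TER, RED
p_action = softmax(W_a @ e_t)          # Eq. (2)

W_p = rng.normal(size=(5, 2 * d))      # 5 domain-general predicate types (Table 1)
p_general = softmax(W_p @ e_t)         # Eq. (3)

# One schematic reading of Eq. (4): score each buffer word against s_t.
p_nl_term = softmax(B @ s_t)

print(p_action.round(3), p_general.argmax(), p_nl_term.argmax())
```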
To map an ungrounded term u_t to a grounded term g_t, we compute the conditional probability of g_t given u_t with a bi-linear neural network:

p(g_t | u_t) \propto \exp(\vec{u}_t \cdot W_{ug} \cdot \vec{g}_t^{\top})    (5)

where \vec{u}_t is the contextual representation of the ungrounded term given by the bidirectional LSTM, \vec{g}_t is the grounded term embedding, and W_{ug} is the weight matrix. The above grounding step can be interpreted as learning a lexicon: the model exclusively relies on the intermediate representation U to predict the target meaning representation G without taking into account any additional features based on the utterance. In practice, U may provide sufficient contextual background for closed domain semantic parsing, where an ungrounded predicate often maps to a single grounded predicate, but is a relatively impoverished representation for parsing large open-domain knowledge bases like Freebase. In this case, we additionally rely on a discriminative reranker which ranks the grounded representations derived from ungrounded representations (see Section 3.4).

3.3 Training Objective

When the target meaning representation is available, we directly compare it against our predictions and back-propagate. When only denotations are available, we compare surrogate meaning representations against our predictions (Reddy et al., 2014). Surrogate representations are those with the correct denotations. When there exist multiple surrogate representations (the average number of Freebase surrogate representations obtained with highest denotation match (F1) is 1.4), we select one randomly and back-propagate. The global effect of the above update rule is close to maximizing the marginal likelihood of denotations, which differs from recent work on weakly-supervised semantic parsing based on reinforcement learning (Neelakantan et al., 2017).

Consider utterance x with ungrounded meaning representation U and grounded meaning representation G. Both U and G are defined with a sequence of transition actions (same for U and G) and a sequence of terms (different for U and G). Recall that a = [a_1, · · · , a_n] denotes the transition action sequence defining U and G; let u = [u_1, · · · , u_k] denote the ungrounded terms (e.g., predicates), and g = [g_1, · · · , g_k] the grounded terms. We aim to maximize the likelihood of the grounded meaning representation p(G|x) over all training examples. This likelihood can be decomposed into the likelihood of the grounded action sequence p(a|x) and the grounded term sequence p(g|x), which we optimize separately.

For the grounded action sequence (which by design is the same as the ungrounded action sequence and therefore the output of the transition system), we can directly maximize the log likelihood log p(a|x) for all examples:

L_a = \sum_{x \in T} \log p(a|x) = \sum_{x \in T} \sum_{t=1}^{n} \log p(a_t|x)    (6)

where T denotes examples in the training data. For the grounded term sequence g, since the intermediate ungrounded terms are latent, we maximize the expected log likelihood of the grounded terms \sum_u [p(u|x) \log p(g|u, x)] for all examples, which is a lower bound of the log likelihood log p(g|x):

L_g = \sum_{x \in T} \sum_u [p(u|x) \log p(g|u, x)] = \sum_{x \in T} \sum_u \left[ p(u|x) \sum_{t=1}^{k} \log p(g_t|u_t) \right]    (7)

The final objective is the combination of L_a and L_g, denoted as L_G = L_a + L_g. We optimize this objective with the method described in Lei et al. (2016).

3.4 Reranker

As discussed above, for open domain semantic parsing, solely relying on the ungrounded representation would result in an impoverished model lacking sentential context useful for disambiguation decisions.
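To make the lexical mapping of Section 3.2 concrete, the following numpy sketch spells out the bilinear score in Equation (5); the embedding sizes, grounded vocabulary and random parameters are illustrative placeholders rather than learned values. Normalizing these scores over the grounded vocabulary yields the lexicon-like distribution p(g_t | u_t) used during grounding.

```python
# Schematic of the bilinear lexical mapping p(g_t | u_t) from Eq. (5).
# Embeddings and vocabulary are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d_u, d_g, n_grounded = 8, 6, 4
u_t = rng.normal(size=d_u)                 # contextual biLSTM vector of u_t
G = rng.normal(size=(n_grounded, d_g))     # grounded term embeddings
W_ug = rng.normal(size=(d_u, d_g))         # bilinear weight matrix

scores = G @ (W_ug.T @ u_t)                # u_t . W_ug . g for every grounded g
p_g_given_u = softmax(scores)              # Eq. (5)

grounded_vocab = ["state", "next_to", "capital", "loc"]   # toy vocabulary
print(dict(zip(grounded_vocab, p_g_given_u.round(3))))
```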
For all Freebase experiments, we followed previous work (Berant et al., 2013; Berant and Liang, 2014; Reddy et al., 2014) in additionally training a discriminative ranker to re-rank grounded representations globally. The discriminative ranker is a maximum-entropy model (Berant et al., 2013). The objective is to maximize the log likelihood of the correct answer y given x by summing over all grounded candidates G with denotation y (i.e., [[G]]_K = y):

L_y = \sum_{(x,y) \in T} \log \sum_{[[G]]_K = y} p(G|x)    (8)

p(G|x) \propto \exp\{f(G, x)\}    (9)

where f(G, x) is a feature function that maps the pair (G, x) into a feature vector. We give details on the features we used in Section 4.2.

4 Experiments

In this section, we verify empirically that our semantic parser derives useful meaning representations. We give details on the evaluation datasets and baselines used for comparison. We also describe implementation details and the features used in the discriminative ranker.

4.1 Datasets

We evaluated our model on the following datasets, which cover different domains and use different types of training data, i.e., pairs of natural language utterances and grounded meanings or question-answer pairs.

GEOQUERY (Zelle and Mooney, 1996) contains 880 questions and database queries about US geography. The utterances are compositional, but the language is simple and the vocabulary size small. The majority of questions include at most one entity.

SPADES (Bisk et al., 2016) contains 93,319 questions derived from CLUEWEB09 (Gabrilovich et al., 2013) sentences. Specifically, the questions were created by randomly removing an entity, thus producing sentence-denotation pairs (Reddy et al., 2014). The sentences include two or more entities and although they are not very compositional, they constitute a large-scale dataset for neural network training.

WEBQUESTIONS (Berant et al., 2013) contains 5,810 question-answer pairs. Similar to SPADES, it is based on Freebase and the questions are not very compositional. However, they are real questions asked by people on the Web.

Finally, GRAPHQUESTIONS (Su et al., 2016) contains 5,166 question-answer pairs which were created by showing 500 Freebase graph queries to Amazon Mechanical Turk workers and asking them to paraphrase them into natural language.

4.2 Implementation Details

Amongst the four datasets described above, GEOQUERY has annotated logical forms which we directly use for training. For the other three datasets, we treat surrogate meaning representations which lead to the correct answer as gold standard. The surrogates were selected from a subset of candidate Freebase graphs, which were obtained by entity linking. Entity mentions in SPADES have been automatically annotated with Freebase entities (Gabrilovich et al., 2013). For WEBQUESTIONS and GRAPHQUESTIONS, we follow the procedure described in Reddy et al. (2016). We identify potential entity spans using seven handcrafted part-of-speech patterns and associate them with Freebase entities obtained from the Freebase/KG API (http://developers.google.com/freebase/). We use a structured perceptron trained on the entities found in WEBQUESTIONS and GRAPHQUESTIONS to select the top 10 non-overlapping entity disambiguation possibilities. We treat each possibility as a candidate input utterance, and use the perceptron score as a feature in the discriminative reranker, thus leaving the final disambiguation to the semantic parser. Apart from the entity score, the discriminative ranker uses the following basic features.
The first feature is the likelihood score of a grounded representation aggregating all intermediate representations. The second set of features includes the embedding similarity between the relation and the utterance, as well as the similarity between the relation and the question words. The last set of features includes the answer type as indicated by the last word in the Freebase relation (Xu et al., 2016).

We used the Adam optimizer for training with an initial learning rate of 0.001, two momentum parameters [0.99, 0.999], and batch size 1. The dimensions of the word embeddings, LSTM states, entity embeddings and relation embeddings are [50, 100, 100, 100]. The word embeddings were initialized with Glove embeddings (Pennington et al., 2014). All other embeddings were randomly initialized.

4.3 Results

Experimental results on the four datasets are summarized in Tables 3–6. We present comparisons of our system, which we call SCANNER (as a shorthand for SymboliC meANiNg rEpResentation), against a variety of models previously described in the literature.

GEOQUERY results are shown in Table 5. The first block contains symbolic systems, whereas neural models are presented in the second block. We report accuracy, which is defined as the proportion of utterances that are correctly parsed to their gold standard logical forms. All previous neural systems (Dong and Lapata, 2016; Jia and Liang, 2016) treat semantic parsing as a sequence transduction problem and use LSTMs to directly map utterances to logical forms. SCANNER yields performance improvements over these systems when using comparable data sources for training. Jia and Liang (2016) achieve better results with synthetic data that expands GEOQUERY; we could adopt their approach to improve model performance; however, we leave this to future work.

Models                        F1
Berant et al. (2013)          35.7
Yao and Van Durme (2014)      33.0
Berant and Liang (2014)       39.9
Bast and Haussmann (2015)     49.4
Berant and Liang (2015)       49.7
Reddy et al. (2016)           50.3
Bordes et al. (2014)          39.2
Dong et al. (2015)            40.8
Yih et al. (2015)             52.5
Xu et al. (2016)              53.3
Neural Baseline               48.3
SCANNER                       49.4

Table 3: WEBQUESTIONS results.

Models                                  F1
SEMPRE (Berant et al., 2013)            10.80
PARASEMPRE (Berant and Liang, 2014)     12.79
JACANA (Yao and Van Durme, 2014)        5.08
Neural Baseline                         16.24
SCANNER                                 17.02

Table 4: GRAPHQUESTIONS results. Numbers for comparison systems are from Su et al. (2016).

Table 6 reports SCANNER’s performance on SPADES. For all Freebase-related datasets we use average F1 (Berant et al., 2013) as our evaluation metric. Previous work on this dataset has used a semantic parsing framework similar to ours, where natural language is converted to an intermediate syntactic representation and then grounded to Freebase. Specifically, Bisk et al. (2016) evaluate the effectiveness of four different CCG parsers on the semantic parsing task when varying the amount of supervision required. As can be seen, SCANNER outperforms all CCG variants (from unsupervised to fully supervised) without having access to any manually annotated derivations or lexicons. For fair comparison, we also built a neural baseline that encodes an utterance with a recurrent neural network and then predicts a grounded meaning representation directly (Ture and Jojic, 2016; Yih et al., 2016). Again, we observe that SCANNER outperforms this baseline.

Results on WEBQUESTIONS are summarized in Table 3.
SCANNER obtains performance on par with the best symbolic systems (see the first block in the table). It is important to note that Bast and Haussmann (2015) develop a question answering system, which contrary to ours cannot produce meaning representations, whereas Berant and Liang (2015) propose a sophisticated agenda-based parser which is trained borrowing ideas from imitation learning.

Models                                   Accuracy
Zettlemoyer and Collins (2005)           79.3
Zettlemoyer and Collins (2007)           86.1
Kwiatkowksi et al. (2010)                87.9
Kwiatkowski et al. (2011)                88.6
Kwiatkowski et al. (2013)                88.0
Zhao and Huang (2015)                    88.9
Liang et al. (2011)                      91.1
Dong and Lapata (2016)                   84.6
Jia and Liang (2016)                     85.0
Jia and Liang (2016) with extra data     89.1
SCANNER                                  86.7

Table 5: GEOQUERY results.

Models                                     F1
Unsupervised CCG (Bisk et al., 2016)       24.8
Semi-supervised CCG (Bisk et al., 2016)    28.4
Neural baseline                            28.6
Supervised CCG (Bisk et al., 2016)         30.9
Rule-based system (Bisk et al., 2016)      31.4
SCANNER                                    31.5

Table 6: SPADES results.

SCANNER is conceptually similar to Reddy et al. (2016), who also learn a semantic parser via intermediate representations which they generate based on the output of a dependency parser. SCANNER performs competitively despite not having access to any linguistically-informed syntactic structures. The second block in Table 3 reports the results of several neural systems. Xu et al. (2016) represent the state of the art on WEBQUESTIONS. Their system uses Wikipedia to prune out erroneous candidate answers extracted from Freebase. Our model would also benefit from a similar post-processing step. As in previous experiments, SCANNER outperforms the neural baseline, too.

Finally, Table 4 presents our results on GRAPHQUESTIONS. We report F1 for SCANNER, the neural baseline model, and three symbolic systems presented in Su et al. (2016). SCANNER achieves a new state of the art on this dataset with a gain of 4.23 F1 points over the best previously reported model.

4.4 Analysis of Intermediate Representations

Since a central feature of our parser is that it learns intermediate representations with natural language predicates, we conducted additional experiments in order to inspect their quality. For GEOQUERY, which contains only 280 test examples, we manually annotated intermediate representations for the test instances and evaluated the learned representations against them. The experimental setup aims to show how humans can participate in improving the semantic parser with feedback at the intermediate stage. In terms of evaluation, we use three metrics shown in Table 7. The first row shows the percentage of exact matches between the predicted representations and the human annotations. The second row refers to the percentage of structure matches, where the predicted representations have the same structure as the human annotations, but may not use the same lexical terms. Among structurally correct predictions, we additionally compute how many tokens are correct, as shown in the third row.

Metrics            Accuracy
Exact match        79.3
Structure match    89.6
Token match        96.5

Table 7: GEOQUERY evaluation of ungrounded meaning representations. We report accuracy against a manually created gold standard.

As can be seen, the induced meaning representations overlap to a large extent with the human gold standard. We also evaluated the intermediate representations created by SCANNER on the other three (Freebase) datasets.
Since creating a manual gold standard for these large datasets is time-consuming, we compared the induced representations against the output of a syntactic parser. Specifically, we converted the questions to event-argument structures with EASYCCG (Lewis and Steedman, 2014), a high coverage and high accuracy CCG parser. EASYCCG extracts predicate-argument structures with a labeled F-score of 83.37%. For further comparison, we built a simple baseline which identifies predicates based on the output of the Stanford POS tagger (Manning et al., 2014), following the ordering VBD ≫ VBN ≫ VB ≫ VBP ≫ VBZ ≫ MD.

As shown in Table 8, on SPADES and WEBQUESTIONS, the predicates learned by our model match the output of EASYCCG more closely than the heuristic baseline. But for GRAPHQUESTIONS, which contains more compositional questions, the mismatch is higher. However, since the key idea of our model is to capture salient meaning for the task at hand rather than strictly obey syntax, we would not expect the predicates induced by our system to entirely agree with those produced by the syntactic parser.

Dataset             SCANNER    Baseline
SPADES              51.2       45.5
–conj (1422)        56.1       66.4
–control (132)      28.3       40.5
–pp (3489)          46.2       23.1
–subord (76)        37.9       52.9
WEBQUESTIONS        42.1       25.5
GRAPHQUESTIONS      11.9       15.3

Table 8: Evaluation of predicates induced by SCANNER against EASYCCG. We report F1 (%) across datasets. For SPADES, we also provide a breakdown for various utterance types.

To further analyze how the learned predicates differ from syntax-based ones, we grouped utterances in SPADES into four types of linguistic constructions: coordination (conj), control and raising (control), prepositional phrase attachment (pp), and subordinate clauses (subord). Table 8 also shows the breakdown of matching scores per linguistic construction, with the number of utterances in each type. In Table 9, we provide examples of predicates identified by SCANNER, indicating whether they agree or not with the output of EASYCCG. As a reminder, the task in SPADES is to predict the entity masked by a blank symbol ( ). As can be seen in Table 8, the matching score is relatively high for utterances involving coordination and prepositional phrase attachments. The model will often identify informative predicates (e.g., nouns) which do not necessarily agree with linguistic intuition. For example, in the utterance wilhelm maybach and his son started maybach in 1909 (see Table 9), SCANNER identifies the predicate-argument structure son(wilhelm maybach) rather than started(wilhelm maybach). We also observed that the model struggles with control and subordinate constructions. It has difficulty distinguishing control from raising predicates, as exemplified in the utterance ceo john thain agreed to leave from Table 9, where it identifies the raising predicate agreed. For subordinate clauses, SCANNER tends to take shortcuts, identifying as predicates words closest to the blank symbol.

[Table 9: Informative predicates identified by SCANNER in various types of utterances (coordination, control, prepositional phrase attachment, and subordinate clauses). In the original table, yellow marks predicates identified by both SCANNER and EASYCCG, red marks predicates identified by SCANNER alone, and green marks predicates identified by EASYCCG alone; the example utterances and their color highlighting do not survive text extraction and are therefore not reproduced here.]

5 Discussion

We presented a neural semantic parser which converts natural language utterances to grounded meaning representations via intermediate predicate-argument structures. Our model
essentially jointly learns how to parse natural language semantics and the lexicons that help grounding. Compared to previous neural semantic parsers, our model is more interpretable, as the intermediate structures are useful for inspecting what the model has learned and whether it matches linguistic intuition.

An assumption our model imposes is that ungrounded and grounded representations are structurally isomorphic. An advantage of this assumption is that tokens in the ungrounded and grounded representations are strictly aligned. This allows the neural network to focus on parsing and lexical mapping, sidestepping the challenging structure mapping problem which would result in a larger search space and higher variance. On the negative side, the structural isomorphism assumption restricts the expressiveness of the model, especially since one of the main benefits of adopting a two-stage parser is the potential of capturing domain-independent semantic information via the intermediate representation. While it would be challenging to handle drastically non-isomorphic structures in the current model, it is possible to perform local structure matching, i.e., when the mapping between natural language and domain-specific predicates is many-to-one or one-to-many. For instance, Freebase does not contain a relation representing daughter, using instead two relations representing female and child. Previous work (Kwiatkowski et al., 2013) models such cases by introducing collapsing (for many-to-one mapping) and expansion (for one-to-many mapping) operators. Within our current framework, these two types of structural mismatches can be handled with semi-Markov assumptions (Sarawagi and Cohen, 2005; Kong et al., 2016) in the parsing (i.e., predicate selection) and the grounding steps, respectively.
Aside from relaxing strict isomorphism, we would also like to perform crossdomain semantic parsing where the first stage of the semantic parser is shared across domains. Acknowledgments We would like to thank three anonymous reviewers, members of the Edinburgh ILCC and the IBM Watson, and Abulhair Saparov for feedback. The support of the European Research Council under award number 681760 “Translating Multiple Modalities into Text” is gratefully acknowledged. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly 52 learning to align and translate. In Proceedings of ICLR 2015. San Diego, California. Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on Freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, pages 1431–1440. Emily M Bender, Dan Flickinger, Stephan Oepen, Woodley Packard, and Ann Copestake. 2015. Layers of interpretation: On grammar and compositionality. In Proceedings of the 11th International Conference on Computational Semantics. London, UK, pages 239–249. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, Washington, pages 1533– 1544. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland, pages 1415–1425. Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics 3:545–558. Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, and Mark Steedman. 2016. Evaluating induced CCG parsers on grounded semantic parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, pages 2022–2027. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, pages 615–620. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Sofia, Bulgaria, pages 423–433. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, pages 2331–2336. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 33–43. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over Freebase with multicolumn convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 260–269. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 334–343. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 199–209. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland, pages 1426–1436. Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. FACC1: Freebase annotation of ClueWeb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0) . Matt Gardner and Jayant Krishnamurthy. 2017. OpenVocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge. In Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, California, pages 3195– 3201. Jonas Groschwitz, Alexander Koller, and Christoph Teichmann. 2015. Graph parsing with s-graph grammars. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1481–1490. Steve Harris, Andy Seaborne, and Eric Prud’hommeaux. 2013. SPARQL 1.1 query language. W3C recommendation 21(10). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Florian Holzschuher and Ren´e Peinl. 2013. Performance of graph query languages: comparison of 53 cypher, gremlin and native access in Neo4j. In Proceedings of the Joint EDBT/ICDT 2013 Workshops. ACM, pages 195–204. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 12–22. Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to Transform Natural to Formal Languages. In Proceedings for the 20th National Conference on Artificial Intelligence. Pittsburgh, Pennsylvania, pages 1062–1068. Lingpeng Kong, Chris Dyer, and Noah A Smith. 2016. Segmental recurrent neural networks. In Proceedings of ICLR 2016. San Juan, Puerto Rico. Tom´aˇs Koˇcisk´y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, pages 1078–1087. Jayant Krishnamurthy and Tom Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Jeju Island, Korea, pages 754–765. Jayant Krishnamurthy and Tom M. Mitchell. 2015. Learning a Compositional Semantics for Freebase with an Open Predicate Vocabulary. Transactions of the Association for Computational Linguistics 3:257–270. Tom Kwiatkowksi, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. 
Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Cambridge, MA, pages 1223–1233. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling Semantic Parsers with On-the-Fly Ontology Matching. In Proceedings of Empirical Methods on Natural Language Processing. pages 1545–1556. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Edinburgh, Scotland, pages 1512–1523. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, pages 107– 117. Mike Lewis and Mark Steedman. 2014. A* CCG parsing with a supertag-factored model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, pages 990–1000. Percy Liang. 2013. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408 . Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Portland, Oregon, pages 590–599. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Baltimore, Maryland, pages 55–60. Arvind Neelakantan, Quoc V Le, Martin Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In Proceedings of ICLR 2017. Toulon, France. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1470–1480. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, pages 1532– 1543. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without questionanswer pairs. Transactions of the Association for Computational Linguistics 2:377–392. Siva Reddy, Oscar T¨ackstr¨om, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming dependency structures to logical forms for semantic parsing. Transactions of the Association for Computational Linguistics 4:127–140. Sunita Sarawagi and William W Cohen. 2005. Semimarkov conditional random fields for information extraction. In Advances in Neural Information Processing Systems 17, MIT Press, pages 1185–1192. Yu Su, Huan Sun, Brian Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. 2016. On generating characteristic-rich question sets for 54 qa evaluation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, pages 562–572. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. 
Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, MIT Press, pages 3104–3112. Ferhan Ture and Oliver Jojic. 2016. Simple and effective question answering with recurrent neural networks. arXiv preprint arXiv:1606.05029 . Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28. MIT Press, pages 2773–2781. Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. New York City, USA, pages 439–446. Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on Freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany, pages 2326– 2336. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with Freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland, pages 956–966. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1321–1331. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Berlin, Germany, pages 201–206. John M. Zelle and Raymond J. Mooney. 1996. Learning to Parse Database Queries Using Inductive Logic Programming. In Proceedings of the 13th National Conference on Artificial Intelligence. Portland, Oregon, pages 1050–1055. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Prague, Czech Republic, pages 678–687. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. In Proceedings of 21st Conference in Uncertainilty in Artificial Intelligence. Edinburgh, Scotland, pages 658–666. Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Denver, Colorado, pages 1416–1421. 55
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 541–551 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1050 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 541–551 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1050 Predicting Native Language from Gaze Yevgeni Berzak MIT CSAIL [email protected] Chie Nakamura MIT Linguistics [email protected] Suzanne Flynn MIT Linguistics [email protected] Boris Katz MIT CSAIL [email protected] Abstract A fundamental question in language learning concerns the role of a speaker’s first language in second language acquisition. We present a novel methodology for studying this question: analysis of eye-movement patterns in second language reading of free-form text. Using this methodology, we demonstrate for the first time that the native language of English learners can be predicted from their gaze fixations when reading English. We provide analysis of classifier uncertainty and learned features, which indicates that differences in English reading are likely to be rooted in linguistic divergences across native languages. The presented framework complements production studies and offers new ground for advancing research on multilingualism.1 1 Introduction The influence of a speaker’s native language on learning and performance in a foreign language, also known as cross-linguistic transfer, has been studied for several decades in linguistics and psychology (Odlin, 1989; Martohardjono and Flynn, 1995; Jarvis and Pavlenko, 2008; Berkes and Flynn, 2012; Alonso, 2015). The growing availably of learner corpora has also sparked interest in cross-linguistic influence phenomena in NLP, where studies have explored the task of Native Language Identification (NLI) (Tetreault et al., 2013), as well as analysis of textual features in relation to the author’s native language (Jarvis and Crossley, 2012; Swanson and Charniak, 2013; Malmasi and Dras, 2014). Despite these advances, 1The experimental data collected in this study will be made publicly available. the extent and nature of first language influence in second language processing remains far from being established. Crucially, most prior work on this topic focused on production, while little is currently known about cross-linguistic influence in language comprehension. In this work, we present a novel framework for studying cross-linguistic influence in language comprehension using eyetracking for reading and free-form native English text. We collect and analyze English newswire reading data from 182 participants, including 145 English as Second Language (ESL) learners from four different native language backgrounds: Chinese, Japanese, Portuguese and Spanish, as well as 37 native English speakers. Each participant reads 156 English sentences, half of which are shared across all participants, and the remaining half are individual to each participant. All the sentences are manually annotated with part-of-speech (POS) tags and syntactic dependency trees. We then introduce the task of Native Language Identification from Reading (NLIR), which requires predicting a subject’s native language from gaze while reading text in a second language. 
Focusing on ESL participants and using a log-linear classifier with word fixation times normalized for reading speed as features, we obtain 71.03 NLIR accuracy in the shared sentences regime. We further demonstrate that NLIR can be generalized effectively to the individual sentences regime, in which each subject reads a different set of sentences, by grouping fixations according to linguistically motivated clustering criteria. In this regime, we obtain an NLIR accuracy of 51.03. Further on, we provide classification and feature analyses, suggesting that the signal underlying NLIR is likely to be related to linguistic characteristics of the respective native languages. First, drawing on previous work on ESL production, we 541 observe that classifier uncertainty in NLIR correlates with global linguistic similarities across native languages. In other words, the more similar are the languages, the more similar are the reading patterns of their native speakers in English. Second, we perform feature analysis across native and non-native English speakers, and discuss structural and lexical factors that could potentially drive some of the non-native reading patterns in each of our native languages. Taken together, our results provide evidence for a systematic influence of native language properties on reading, and by extension, on online processing and comprehension in a second language. To summarize, we introduce a novel framework for studying cross-linguistic influence in language learning by using eyetracking for reading free-form English text. We demonstrate the utility of this framework in the following ways. First, we obtain the first NLIR results, addressing both the shared and the individual textual input scenarios. We further show that reading preserves linguistic similarities across native languages of ESL readers, and perform feature analysis, highlighting key distinctive reading patterns in each native language. The proposed framework complements and extends production studies, and can inform linguistic inquiry on cross-linguistic influence. This paper is structured as follows. In section 2 we present the data and our experimental setup. Section 3 describes our approach to NLIR and summarizes the classification results. We analyze cross-linguistic influence in reading in section 4. In section 4.1 we examine NLIR classification uncertainty in relation to linguistic similarities between native languages. In section 4.2 we discuss several key fixation features associated with different native languages. Section 5 surveys related work, and section 6 concludes. 2 Experimental Setup Participants We recruited 182 adult participants. Of those, 37 are native English speakers and 145 are ESL learners from four native language backgrounds: Chinese, Japanese, Portuguese and Spanish. All the participants in the experiment are native speakers of only one language. The ESL speakers were tested for English proficiency using the grammar and listening sections of the Michigan English test (MET), which consist of 50 multiple choice questions. The English proficiency score was calculated as the number of correctly answered questions on these modules. The majority of the participants scored in the intermediate-advanced proficiency range. Table 1 presents the number of participants and the mean English proficiency score for each native language group. 
Additionally, we collected metadata on gender, age, level of education, duration of English studies and usage, time spent in English speaking countries and proficiency in any additional language spoken.

Native language    # Participants    English Score
Chinese            36                42.0
Japanese           36                40.3
Portuguese         36                41.1
Spanish            37                42.4
English            37                NA

Table 1: Number of participants and mean MET English score by native language group.

Reading Materials We utilize 14,274 randomly selected sentences from the Wall Street Journal part of the Penn Treebank (WSJ-PTB) (Marcus et al., 1993). To support reading convenience and measurement precision, the maximal sentence length was set to 100 characters, leading to an average sentence length of 11.4 words. Word boundaries are defined as whitespaces. From this sentence pool, 78 sentences (900 words) were presented to all participants (henceforth shared sentences) and the remaining 14,196 sentences were split into 182 individual batches of 78 sentences (henceforth individual sentences, averaging 880 words per batch). All the sentences include syntactic annotations from the Universal Dependency Treebank project (UDT) (McDonald et al., 2013). The annotations include PTB POS tags (Santorini, 1990), Google universal POS tags (Petrov et al., 2012) and dependency trees. The dependency annotations of the UDT are converted automatically from the manual phrase structure tree annotations of the WSJ-PTB.

Gaze Data Collection Each participant read 157 sentences. The first sentence was presented to familiarize participants with the experimental setup and was discarded during analysis. The following 156 sentences consisted of 78 shared and 78 individual sentences.
In the individual regime, we use the individual batches from our data to address the more challenging variant of the NLIR task in which the reading material given to each participant is different.

3.1 Features

We seek to utilize features that can provide robust, simple and interpretable characterizations of reading patterns. To this end, we use speed-normalized fixation duration measures over word sequences.

Fixation Measures We utilize three measures of word fixation duration:

• First Fixation duration (FF) Duration of the first fixation on a word.
• First Pass duration (FP) Time spent from first entering a word to first leaving it (including re-fixations within the word).
• Total Fixation duration (TF) The sum of all fixation times on a word.

We experiment with fixations over unigram, bigram and trigram sequences seq_{i,k} = w_i, ..., w_{i+k-1}, k ∈ {1, 2, 3}, where for each metric M ∈ {FF, FP, TF} the fixation time for a sequence M_{seq_{i,k}} is defined as the sum of fixations on the individual tokens M_w in the sequence (note that for bigrams and trigrams, one could also measure FF and FP for interest regions spanning the sequence, instead of, or in addition to, summing these fixation times over individual tokens):

M_{seq_{i,k}} = \sum_{w' \in seq_{i,k}} M_{w'}    (1)

Importantly, we control for variation in reading speeds across subjects by normalizing each subject’s sequence fixation times. For each metric M and sequence seq_{i,k} we normalize the sequence fixation time M_{seq_{i,k}} relative to the subject’s sequence fixation times in the textual context of the sequence. The context C is defined as the sentence in which the sequence appears for the Words in Fixed Context feature-set, and as the entire textual input for the Syntactic and Information Clusters feature-sets (see definitions of the feature-sets below). The normalization term S_{M,C,k} is accordingly defined as the metric’s fixation time per sequence of length k in the context:

S_{M,C,k} = \frac{1}{|C|} \sum_{seq_k \in C} M_{seq_k}    (2)

We then obtain a normalized fixation time M_{norm, seq_{i,k}} as:

M_{norm, seq_{i,k}} = \frac{M_{seq_{i,k}}}{S_{M,C,k}}    (3)

Feature Types We use the above presented speed-normalized fixation metrics to extract three feature-sets: Words in Fixed Context (WFC), Syntactic Clusters (SC) and Information Clusters (IC). WFC is a token-level feature-set that presupposes a fixed textual input for all participants. It is thus applicable only in the shared sentences regime. SC and IC are type-level features which provide abstractions over sequences of words. Crucially, they can also be applied when participants read different sentences.

• Words in Fixed Context (WFC) The WFC features capture fixation times on word sequences in a specific sentence. This feature-set consists of FF, FP and TF times for each of the 900 unigram, 822 bigram, and 744 trigram word sequences comprising the shared sentences. The fixation times of each metric are normalized for each participant relative to their fixations on sequences of the same length in the surrounding sentence. As noted above, the WFC feature-set is not applicable in the individual regime, as it requires identical sentences for all participants.

• Syntactic Clusters (SC) SC features are average globally normalized FF, FP and TF times for word sequences clustered by our three types of syntactic labels: universal POS, PTB POS, and syntactic relation labels. An example of such a feature is the average of speed-normalized TF times spent on the PTB POS bigram sequence DT NN. We take into account labels that appear at least once in the reading input of all participants.
On the four non-native languages, considering all three label types, we obtain 104 unigram, 636 bigram and 1,310 trigram SC features per fixation metric in the shared regime, and 56 unigram, 95 bigram and 43 trigram SC features per fixation metric in the individual regime.

• Information Clusters (IC) We also obtain average FF, FP and TF for words clustered according to their length, measured in number of characters. Word length was previously shown to be a strong predictor of information content (Piantadosi et al., 2011). As such, it provides an alternative abstraction to the syntactic clusters, combining both syntactic and lexical information. As with SC features, we take into account features that appear at least once in the textual input of all participants. For our set of non-native languages, we obtain for each fixation metric 15 unigram, 21 bigram and 23 trigram IC features in the shared regime, and 12 unigram, 18 bigram and 18 trigram IC features in the individual regime. Notably, this feature-set is very compact and, differently from the syntactic clusters, does not rely on the availability of external annotations.

In each feature-set, we perform a final preprocessing step for each individual feature, in which we derive a zero mean unit variance scaler from the training set feature values, and apply it to transform both the training and the test values of the feature to Z scores.

3.2 Model

The experiments are carried out using a log-linear model:

p(y|x; \theta) = \frac{\exp(\theta \cdot f(x, y))}{\sum_{y' \in Y} \exp(\theta \cdot f(x, y'))}    (4)

where y is the reader’s native language, x is the reading input and \theta are the model parameters. The classifier is trained with gradient descent using L-BFGS (Byrd et al., 1995).

3.3 Experimental Results

In Table 2 we report 10-fold cross-validation results on NLIR in the shared and the individual experimental regimes for native speakers of Chinese, Japanese, Portuguese and Spanish. We introduce two baselines against which we compare the performance of our feature-sets. The majority baseline selects the native language with the largest number of participants. The random clusters baseline clusters words into groups randomly, with the number of groups set to the number of syntactic categories in our data.

                                Shared Sentences Regime             Individual Sentences Regime
                                unigrams   +bigrams   +trigrams     unigrams   +bigrams   +trigrams
Majority Class                  25.52                               25.52
Random Clusters                 22.76                               22.07
Information Clusters (IC)       41.38      44.14      46.21         38.62      32.41      32.41
Syntactic Clusters (SC)         45.52      57.24      58.62         48.97      43.45      48.28
SC+IC                           51.72      57.24      60.0          51.03      46.21      49.66
Words in Fixed Context (WFC)    64.14      68.28      71.03         NA

Table 2: Native Language Identification from Reading results with 10-fold cross-validation for native speakers of Chinese, Japanese, Portuguese and Spanish. In the Shared regime all the participants read the same 78 sentences. In the Individual regime each participant reads a different set of 78 sentences.

In the shared regime, WFC fixations yield the highest classification rates, substantially outperforming the cluster feature-sets and the two baselines. The strongest result using this feature-set, 71.03, is obtained by combining unigram, bigram and trigram fixation times. In addition to this outcome, we note that training binary classifiers in this setup yields accuracies ranging from 68.49 for the language pair Portuguese and Spanish, to 93.15 for Spanish and Japanese. These results confirm the effectiveness of the shared input regime for performing reliable NLIR, and suggest a strong native language signal in non-native reading fixation times.
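To make the feature and model pipeline of Sections 3.1–3.2 concrete, the following sketch (with made-up toy data standing in for the eyetracking records) computes speed-normalized fixation features in the spirit of Equations (1)–(3), z-scores each feature, and fits a multinomial logistic regression with L-BFGS, which corresponds to the log-linear model in Equation (4).

```python
# Sketch of speed-normalized fixation features (Eqs. 1-3) and the
# log-linear NLIR classifier; all data below is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def normalized_fixations(tf_times, k=1):
    """Speed-normalize total-fixation (TF) times for k-gram sequences.

    tf_times: per-word TF durations (ms) of one subject in one context C.
    Returns M_norm for every k-gram, i.e. Eq. (3) with S_{M,C,k} approximated
    by the mean k-gram fixation time in the context (Eq. 2).
    """
    tf_times = np.asarray(tf_times, dtype=float)
    seq_tf = np.array([tf_times[i:i + k].sum()          # Eq. (1)
                       for i in range(len(tf_times) - k + 1)])
    return seq_tf / seq_tf.mean()                        # Eqs. (2)-(3)

# Toy example: one subject's TF times (ms) over an 8-word sentence.
print(normalized_fixations([210, 180, 0, 250, 190, 300, 0, 220], k=2))

# Toy feature matrix: one row per participant, one column per cluster feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))            # e.g. unigram IC+SC features
y = rng.integers(0, 4, size=40)          # native language labels (4 classes)

# Z-score each feature, then fit the log-linear model with L-BFGS (Eq. 4).
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(solver="lbfgs", max_iter=1000))
clf.fit(X, y)
print(clf.predict_proba(X[:2]).round(2))
```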
regime for performing reliable NLIR, and suggest a strong native language signal in non-native reading fixation times. SC features yield accuracies of 45.52 to 58.62 on the shared sentences, while IC features exhibit weaker performance in this regime, with accuracies of 41.38 to 46.21. Both results are well above chance, but lower than WFC fixations due to the information loss imposed by the clustering step. Crucially, both feature-sets remain effective in the individual input regime, with 43.45 to 48.97 accuracy for SC features and 32.41 to 38.62 accuracy for IC features. The strongest result in the individual regime is 51.03, obtained by concatenating IC and SC features over unigrams. We also note that using this setup in a binary classification scheme yields results ranging from chance level 49.31 for Portuguese versus Spanish, to 84.93 on Spanish versus Japanese. Generally, we observe that adding bigram and trigram fixations in the shared regime leads to performance improvements compared to using unigram features only. This trend does not hold for the individual sentences, presumably due to a combination of feature sparsity and context variation in this regime. We also note that IC and SC features tend to perform better together than in separation, suggesting that the information encoded using these feature-sets is to some extent complementary. The generalization power of our cluster based feature-sets has both practical and theoretical consequences. Practically, they provide useful abstractions for performing NLIR over arbitrary textual input. That is, they enable performing this task using any textual input during both training and testing phases. Theoretically, the effectiveness of linguistically motivated features in discerning native languages suggests that linguistic factors play an important role in the ESL reading process. The analysis presented in the following sections will further explore this hypothesis. 4 Analysis of Cross-Linguistic Influence in ESL Reading As mentioned in the previous section, the ability to perform NLIR in general, and the effectiveness of linguistically motivated features in particular, suggest that linguistic factors in the native and second languages are pertinent to ESL reading. In this section we explore this hypothesis further, by analyzing classifier uncertainty and the features learned in the NLIR task. 4.1 Preservation of Linguistic Similarity Previous work in NLP suggested a link between textual patterns in ESL production and linguistic similarities of the respective native languages (Nagata and Whittaker, 2013; Nagata, 2014; Berzak et al., 2014, 2015). In particular, Berzak et al. (2014) has demonstrated that NLI classification uncertainty correlates with similarities between languages with respect to their typological features. Here, we extend this framework and examine if preservation of native language similarities in ESL production is paralleled in reading. Similarly to Berzak et al. (2014) we define the classification uncertainty for a pair of native languages y and y′ in our data collection D, as the average probability assigned by the NLIR classifier to one language given the other being the true native language. This approach provides a robust measure of classification confusion that does not rely on the actual performance of the classifier. We interpret the classifier uncertainty as a similarity measure between the respective languages and de545 note it as English Reading Similarity ERS. 
$ERS_{y,y'} = \frac{\sum_{(x,y) \in D_y} p(y'|x;\theta) \,+\, \sum_{(x,y') \in D_{y'}} p(y|x;\theta)}{|D_y| + |D_{y'}|}$   (5)

We compare these reading similarities to the linguistic similarities between our native languages. To approximate these similarities, we utilize feature vectors from the URIEL Typological Compendium (Littel et al., 2016) extracted using the lang2vec tool (Littell et al., 2017). URIEL aggregates, fuses and normalizes typological, phylogenetic and geographical information about the world's languages. We obtain all 103 available morphosyntactic features in URIEL, which are derived from the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013), Syntactic Structures of the World's Languages (SSWL) (Collins and Kayne, 2009) and Ethnologue (Lewis et al., 2015). Missing feature values are completed with a KNN classifier. We also extract URIEL's 3,718 language family features derived from Glottolog (Hammarström et al., 2015). Each of these features represents membership in a branch of Glottolog's world language tree. Truncating features with the same value for all our languages, we remain with 76 features, consisting of 49 syntactic features and 27 family tree features. The linguistic similarity LS between a pair of languages y and y' is then determined by the cosine similarity of their URIEL feature vectors.

$LS_{y,y'} = \frac{v_y \cdot v_{y'}}{\|v_y\| \, \|v_{y'}\|}$   (6)

Figure 1 presents the URIEL-based linguistic similarities for our set of non-native languages against the average NLIR classification uncertainties on the cross-validation test samples. The results presented in this figure are based on the unigram IC+SC feature-set in the individual sentences regime. We also provide a graphical illustration of the language similarities for each measure, using the Ward clustering algorithm (Ward Jr, 1963). We observe a correlation between the two measures, which is also reflected in similar hierarchies in the two language trees. Thus, linguistically motivated features in English reveal linguistic similarities across native languages. This outcome supports the hypothesis that English reading differences across native languages are related to linguistic factors.

[Figure 1: (a) Linguistic versus English reading language similarities. The horizontal axis represents typological and phylogenetic similarity between languages, obtained by vectorizing linguistic features from URIEL and measuring their cosine similarity. The vertical axis represents the average uncertainty of the NLIR classifier in distinguishing ESL readers of each language pair; error bars denote standard error. (b) Ward hierarchical clustering of linguistic similarities between languages. (c) Ward hierarchical clustering of NLIR average pairwise classification uncertainties.]

We note that while comparable results are obtained for the IC and SC feature-sets, together and in separation in the shared regime, WFC features in the shared regime do not exhibit a clear uncertainty distinction when comparing across the pairs Japanese and Spanish, Japanese and Portuguese, Chinese and Spanish, and Chinese and Portuguese.
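Before continuing with the WFC comparison, here is a sketch of how Equations 5 and 6 and the Figure 1 clustering can be computed. It assumes held-out class probabilities from a scikit-learn-style classifier and lang2vec feature vectors; all function and variable names are illustrative rather than taken from the authors' code:

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage

def english_reading_similarity(clf, X, y, l1, l2, classes):
    """Eq. 5: mean probability the NLIR classifier assigns to one language
    when the other is the true native language (computed on held-out data)."""
    proba = clf.predict_proba(X)
    i1, i2 = classes.index(l1), classes.index(l2)
    mask1, mask2 = (y == l1), (y == l2)
    num = proba[mask1, i2].sum() + proba[mask2, i1].sum()
    return num / (mask1.sum() + mask2.sum())

def linguistic_similarity(v1, v2):
    """Eq. 6: cosine similarity of URIEL (lang2vec) feature vectors."""
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def ward_tree(sim, langs):
    """Ward hierarchical clustering over 1 - similarity, as in Figure 1(b, c)."""
    dists = [1 - sim[a, b] for a, b in combinations(range(len(langs)), 2)]
    return linkage(dists, method="ward")
```

In the paper the uncertainties are averaged over cross-validation test folds; here a single fitted classifier stands in for that procedure.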
Instead, this feature-set yields very low uncertainty, and correspondingly very high performance ranging from 90.41 to 93.15, for all four language pairs. 546 4.2 Feature Analysis Our framework enables not only native language classification, but also exploratory analysis of native language specific reading patterns in English. The basic question that we examine in this respect is on which features do readers of different native language groups spend more versus less time. We also discuss several potential relations of the observed reading time differences to usage patterns and grammatical errors committed by speakers of our four native languages in production. We obtain this information by extracting grammatical error counts from the CLC FCE corpus (Yannakoudakis et al., 2011), and from the ngram frequency analysis in Nagata and Whittaker (2013). In order to obtain a common benchmark for reading time comparisons across non-native speakers, in this analysis we also consider our group of native English speakers. In this context, we train four binary classifiers that discern each of the non-native groups from native English speakers based on TF times over unigram PTB POS tags in the shared regime. The features with the strongest positive and negative weights learned by these classifiers are presented in table 3. These features serve as a reference point for selecting the case studies discussed below. Interestingly, some of the reading features that are most predictive of each native language lend themselves to linguistic interpretation with respect to structural factors. For example, in Japanese and Chinese we observe shorter reading times for determiners (DT), which do not exist in these languages. Figure 2a presents the mean TF times for determiners in all five native languages, suggesting that native speakers of Portuguese and Spanish, which do have determiners, do not exhibit reduced reading times on this structure compared to natives. In ESL production, missing determiner errors are the most frequent error for native speakers of Japanese and third most common error for native speakers of Chinese. In figure 2b we present the mean TF reading times for pronouns (PRP), where we also see shorter reading times by natives of Japanese and Chinese as compared to English natives. In both languages pronouns can be omitted both in object and subject positions. Portuguese and Spanish, in which pronoun omission is restricted to the subject position present similar albeit weaker tendency. Negative (Fast) Positive (Slow) Chinese DT JJR PRP NN Japanese DT NN CD VBD Portuguese NNS NN-POS PRP VBZ Spanish NNS MD PRP RB Table 3: PTB POS features with the strongest weights learned in non-native versus native classification for each native language in the shared regime. Feature types presented in figure 2 are highlighted in bold. Chinese English Japanese Portuguese Spanish 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 (a) Determiners (DT) Chinese English Japanese Portuguese Spanish 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 (b) Pronouns (PRP) Chinese English Japanese Portuguese Spanish 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 (c) Possessives (NN+POS) Chinese English Japanese Portuguese Spanish 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 (d) Nouns (NN) Figure 2: Mean speed-normalized Total Fixation duration for Determiners (DT), Pronouns (PRP), singular noun possessives (NN+POS), and singular nouns (NN) appearing in the shared sentences. Error bars denote standard error. 
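The discussion of Figure 2 continues below. The binary native-versus-non-native classifiers behind Table 3 have the same log-linear form as Equation 4, and their strongest features can be read off the fitted weights. The sketch below uses scikit-learn with an assumed default regularization setting (the paper does not specify one) and hypothetical variable names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def strongest_features(X, y, feature_names, k=2):
    """Fit a binary log-linear classifier (Eq. 4) with L-BFGS on Z-scored
    features and return the k most negative (fast) and k most positive
    (slow) features, as reported in Table 3."""
    clf = make_pipeline(StandardScaler(),
                        LogisticRegression(solver="lbfgs", max_iter=1000))
    clf.fit(X, y)                                  # y: 1 = L1 group, 0 = native English
    coef = clf.named_steps["logisticregression"].coef_[0]
    order = np.argsort(coef)
    fast = [feature_names[i] for i in order[:k]]
    slow = [feature_names[i] for i in order[-k:][::-1]]
    return fast, slow

# X: mean speed-normalized TF per unigram PTB POS tag, one row per participant
# fast, slow = strongest_features(X, y, ptb_tags)
```

The StandardScaler inside the pipeline mirrors the Z-scoring step described in Section 3.1, fitting the scaler on training data only.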
In figure 2c we further observe that differently from natives of Chinese and Japanese, native speakers of Portuguese and Spanish spend more time on NN+POS in head final possessives such as “the public’s confidence”. While similar constructions exist in Chinese and Japanese, the NN+POS combination is expressed in Portuguese and Spanish as a head initial NN of NN. This form exists in English (e.g. “the confidence of the public”) and is preferred by speakers of these languages in ESL writing (Nagata and Whittaker, 2013). As an additional baseline for this construction, we provide the TF times for NN in figure 2d. There, relative to English natives, we observe longer reading times for Japanese and Chinese and comparable times for Portuguese and Spanish. The reading times of NN in figure 2d also give 547 rise to a second, potentially competing interpretation of differences in ESL reading times, which highlights lexical rather than structural factors. According to this interpretation, increased reading times of nouns are the result of substantially smaller lexical sharing with English by Chinese and Japanese as compared to Spanish and Portuguese. Given the utilized speed normalization, lexical effects on nouns could in principle account for reduced reading times on determiners and pronouns. Conversely, structural influence leading to reduced reading times on determiners and pronouns could explain longer dwelling on nouns. A third possibility consistent with the observed reading patterns would allow for both structural and lexical effects to impact second language reading. Importantly, in each of these scenarios, ESL reading patterns are related to linguistic factors of the reader’s native language. We note that the presented analysis is preliminary in nature, and warrants further study in future research. In particular, reading times and classifier learned features may in some cases differ between the shared and the individual regimes. In the examples presented above, similar results are obtained in the individual sentences regime for DT, PRP and NN. The trend for the NN+POS construction, however, diminishes in that setup with similar reading times for all languages. On the other hand, one of the strongest features for predicting Portuguese and Spanish in the individual regime are longer reading times for prepositions (IN), an outcome that holds in the shared regime only relative to Chinese and Japanese, but not relative to native speakers of English. Despite these caveats, our results suggest that reading patterns can potentially be related to linguistic factors of the reader’s native language. This analysis can be extended in various ways, such as inclusion of additional feature types and fixation metrics, as well as utilization of other comparative methodologies. Combined with evidence from language production, this line of investigation can be instrumental for informing linguistic theory of cross-linguistic influence. 5 Related Work Eyetracking and second language reading Second language reading has been studied using eyetracking, with much of the work focusing on processing of syntactic ambiguities and analysis of specific target word classes such as cognates (Dussias, 2010; Roberts and Siyanova-Chanturia, 2013). In contrast to our work, such studies typically use controlled, rather than free-form sentences. Investigation of global metrics in freeform second language reading was introduced only recently by Cop et al. (2015). 
This study compared ESL and native reading of a novel by native speakers of Dutch, observing longer sentence reading times, more fixations and shorter saccades in ESL reading. Differently from this study, our work focuses on comparison of reading patterns between different native languages. We also analyze a related, but different metric, namely speed normalized fixation durations on word sequences. Eyetracking for NLP tasks Recent work in NLP has demonstrated that reading gaze can serve as a valuable supervision signal for standard NLP tasks. Prominent examples of such work include POS tagging (Barrett and Søgaard, 2015a; Barrett et al., 2016), syntactic parsing (Barrett and Søgaard, 2015b) and sentence compression (Klerke et al., 2016). Our work also tackles a traditional NLP task with free-form text, but differs from this line of research in that it addresses this task only in comprehension. Furthermore, while these studies use gaze recordings of native readers, our work focuses on non-native readers. NLI in production NLI was first introduced in Koppel et al. (2005) and has been drawing considerable attention in NLP, including a recent shared-task challenge with 29 participating teams (Tetreault et al., 2013). NLI has also been driving much of the work on identification of native language related features in writing (Tsur and Rappoport, 2007; Jarvis and Crossley, 2012; Brooke and Hirst, 2012; Tetreault et al., 2012; Swanson and Charniak, 2013, 2014; Malmasi and Dras, 2014; Bykh and Meurers, 2016). Several studies have also linked usage patterns and grammatical errors in production to linguistic properties of the writer’s native language (Nagata and Whittaker, 2013; Nagata, 2014; Berzak et al., 2014, 2015). Our work departs from NLI in writing and introduces NLI and related feature analysis in reading. 6 Conclusion and Outlook We present a novel framework for studying crosslinguistic influence in multilingualism by measuring gaze fixations during reading of free-form En548 glish text. We demonstrate for the first time that this signal can be used to determine a reader’s native language. The effectiveness of linguistically motivated criteria for fixation clustering and our subsequent analysis suggest that the ESL reading process is affected by linguistic factors. Specifically, we show that linguistic similarities between native languages are reflected in similarities in ESL reading. We also identify several key features that characterize reading in different native languages, and discuss their potential connection to structural and lexical properties of the native langauge. The presented results demonstrate that eyetracking data can be instrumental for developing predictive and explanatory models of second language reading. While this work is focused on NLIR from fixations, our general framework can be used to address additional aspects of reading, such as analysis of saccades and gaze trajectories. In future work, we also plan to explore the role of native and second language writing system characteristics in second language reading. More broadly, our methodology introduces parallels with production studies in NLP, creating new opportunities for integration of data, methodologies and tasks between production and comprehension. Furthermore, it holds promise for formulating language learning theory that is supported by empirical findings in naturalistic setups across language processing domains. 
Acknowledgements We thank Amelia Smith, Emily Weng, Run Chen and Lila Jansen for contributions to stimuli preparation and data collection. We also thank Andrei Barbu, Guy Ben-Yosef, Yen-Ling Kuo, Roger Levy, Jonathan Malmaud, Karthik Narasimhan and the anonymous reviewers for valuable feedback on this work. This material is based upon work supported by the Center for Brains, Minds, and Machines (CBMM), funded by NSF STC award CCF-1231216. References Rosa Alonso Alonso. 2015. Crosslinguistic Influence in Second Language Acquisition, volume 95. Multilingual Matters. Maria Barrett, Joachim Bingel, Frank Keller, and Anders Søgaard. 2016. Weakly supervised part-ofspeech tagging using eye-tracking data. In ACL. volume 2, pages 579–584. Maria Barrett and Anders Søgaard. 2015a. Reading behavior predicts syntactic categories. In CoNLL. pages 345–349. Maria Barrett and Anders Søgaard. 2015b. Using reading behavior to predict grammatical functions. In Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning. pages 1–5. Éva Berkes and Suzanne Flynn. 2012. Multilingualism: New perspectives on syntactic development. The Handbook of Bilingualism and Multilingualism, Second Edition pages 137–167. Yevgeni Berzak, Roi Reichart, and Boris Katz. 2014. Reconstructing native language typology from foreign language usage. In Eighteenth Conference on Computational Natural Language Learning (CoNLL). Yevgeni Berzak, Roi Reichart, and Boris Katz. 2015. Contrastive analysis with predictive power: Typology driven estimation of grammatical error distributions in esl. In Conference on Computational Natural Language Learning (CoNLL). Julian Brooke and Graeme Hirst. 2012. Measuring interlanguage: Native language identification with l1influence metrics. In LREC. pages 779–784. Serhiy Bykh and Detmar Meurers. 2016. Advancing linguistic features and insights by label-informed feature grouping: An exploration in the context of native language identification. In COLING. Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. 1995. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing 16(5):1190–1208. Chris Collins and Richard Kayne. 2009. Syntactic Structures of the world’s languages. http://sswl.railsplayground.net. Uschi Cop, Denis Drieghe, and Wouter Duyck. 2015. Eye movement patterns in natural reading: A comparison of monolingual and bilingual reading of a novel. PLOS ONE 10(8):1–38. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. http://wals.info/. Paola E Dussias. 2010. Uses of eye-tracking data in second language sentence processing research. Annual Review of Applied Linguistics 30:149–166. Harald Hammarström, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2015. Glottolog 2.6. Leipzig: Max Planck Institute for Evolutionary Anthropology. http://glottolog.org. 549 Scott Jarvis and Scott A Crossley. 2012. Approaching Language Transfer Through Text Classification: Explorations in the Detection-based Approach, volume 64. Multilingual Matters. Scott Jarvis and Aneta Pavlenko. 2008. Crosslinguistic influence in language and cognition. Routledge. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. NAACL-HLT . Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005. Determining an author’s native language by mining a text for errors. 
In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. ACM, pages 624–628. Paul M. Lewis, Gary F. Simons, and Charles D. Fennig, editors. 2015. Ethnologue: Languages of the World. SIL International, Dallas, Texas. http://www.ethnologue.com. Patrick Littel, David Mortensen, and Lori Levin, editors. 2016. URIEL Typological Database. Pittsburgh: Carnegie Mellon University. http://www.cs.cmu.edu/ dmortens/uriel.html. Patrick Littell, David Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. EACL 2017 page 8. Shervin Malmasi and Mark Dras. 2014. Language transfer hypotheses with linear svm weights. In EMNLP. pages 1385–1390. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics 19(2):313–330. Gita Martohardjono and Suzanne Flynn. 1995. Language transfer: what do we really mean. In L. Eubank, L. Selinker, and M. Sharwood Smith, editors, The current state of Interlanguage: studies in honor of William E. Rutherford, John Benjamins: The Netherlands, pages 205–219. Ryan T McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith B Hall, Slav Petrov, Hao Zhang, Oscar Täckström, et al. 2013. Universal dependency annotation for multilingual parsing. In ACL. pages 92–97. Ryo Nagata. 2014. Language family relationship preserved in non-native english. In COLING. pages 1940–1949. Ryo Nagata and Edward W. D. Whittaker. 2013. Reconstructing an indo-european family tree from nonnative english texts. In ACL. pages 1137–1147. Terence Odlin. 1989. Language transfer: Crosslinguistic influence in language learning. Cambridge University Press. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In LREC. Steven T Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences 108(9):3526–3529. Leah Roberts and Anna Siyanova-Chanturia. 2013. Using eye-tracking to investigate topics in l2 acquisition and l2 processing. Studies in Second Language Acquisition 35(02):213–235. Beatrice Santorini. 1990. Part-of-speech tagging guidelines for the penn treebank project (3rd revision). Technical Reports (CIS) . Ben Swanson and Eugene Charniak. 2013. Extracting the native language signal for second language acquisition. In HLT-NAACL. pages 85–94. Ben Swanson and Eugene Charniak. 2014. Data driven language transfer hypotheses. EACL page 169. Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A report on the first native language identification shared task. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Citeseer, pages 48–57. Joel R Tetreault, Daniel Blanchard, Aoife Cahill, and Martin Chodorow. 2012. Native tongues, lost and found: Resources and empirical evaluations in native language identification. In COLING. pages 2585–2602. Oren Tsur and Ari Rappoport. 2007. Using classifier features for studying the effect of native language on the choice of written second language words. In Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition. Association for Computational Linguistics, pages 9–16. Joe H Ward Jr. 1963. Hierarchical grouping to optimize an objective function. 
Journal of the American Statistical Association 58(301):236–244.

Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL. pages 180–189.

A Supplemental Material

Eyetracking Setup We use a 44.5x30cm screen with 1024x768px resolution to present the reading materials, and a desktop mount Eyelink 1000 eyetracker (1000Hz) to record gaze. The screen, eyetracker camera and chinrest are horizontally aligned on a table surface. The screen center (x=512, y=384) is 79cm away from the center of the forehead bar, and 13cm below it. The eyetracker camera knob is 65cm away from the forehead bar. Throughout the experiment participants hold a joystick with a button for indicating sentence completion, and two buttons for answering yes/no questions. We record the gaze of the participant's dominant eye.

Text Parameters All the textual material in the experiment is presented in Times font, normal style, with font size 23. In our setup, this corresponds to 0.36 degrees (11.3px) average lower case letter width, and 0.49 degrees (15.7px) average upper case letter width. We chose a non-monospace font, as such fonts are generally more common in reading. They are also more compact than monospace fonts, allowing us to substantially increase the upper limit on sentence length.

Calibration We use 3H line calibration with point repetition on the central horizontal line (y=384), using fixation points with a 16px outer circle and a 6px inner circle. At least three calibrations are performed during the experiment, one at the beginning of each experimental section. We also recalibrate upon failure to produce a 300ms fixation on any fixation trigger preceding a sentence or a question within 4 seconds of its appearance. The mean validation error for calibrations across subjects is 0.146 degrees (std 0.038).
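The letter widths in degrees follow from the reported screen geometry via the standard visual angle formula. The snippet below is only an illustrative cross-check of those numbers, using the values quoted above:

```python
import math

def visual_angle_deg(size_px, screen_width_cm=44.5, screen_width_px=1024,
                     distance_cm=79.0):
    """Visual angle subtended by size_px pixels at the given viewing distance."""
    size_cm = size_px * screen_width_cm / screen_width_px
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(visual_angle_deg(11.3))  # ~0.36 deg, average lower case letter width
print(visual_angle_deg(15.7))  # ~0.49 deg, average upper case letter width
```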
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 552–561 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1051 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 552–561 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1051 MORSE: Semantic-ally Drive-n MORpheme SEgment-er Tarek Sakakini University of Illinois Urbana, IL 61820 [email protected] Suma Bhat University of Illinois Urbana, IL 61820 [email protected] Pramod Viswanath University of Illinois Urbana, IL 61820 [email protected] Abstract In this paper we present a novel framework for morpheme segmentation which uses the morpho-syntactic regularities preserved by word representations, in addition to orthographic features, to segment words into morphemes. This framework is the first to consider vocabulary-wide syntactico-semantic information for this task. We also analyze the deficiencies of available benchmarking datasets and introduce our own dataset that was created on the basis of compositionality. We validate our algorithm across different datasets and languages and present new state-of-the-art results. 1 Introduction Morpheme segmentation is a core natural language processing (NLP) task used as an integral component in related-fields such as information retrieval (IR) (Zieman and Bleich, 1997; Kurimo et al., 2007), automatic speech recognition (ASR) (Bilmes and Kirchhoff, 2003; Kurimo et al., 2006), and machine translation (MT) (Lee, 2004; Virpioja et al., 2007). Most previous works have relied solely on orthographic features (Harris, 1970; Goldsmith, 2000; Creutz and Lagus, 2002, 2005, 2007), neglecting the underlying semantic information. This has led to an over-segmentation of words because a change of the surface form pattern is a necessary but insufficient indication of a morphological change. For example, the surface form of “freshman”, hints that it should be segmented to “fresh-man”, although “freshman” does not describe semantically the compositional meaning of “fresh” and “man”. To compensate for this lack of semantic knowledge, previous works (Schone and Jurafsky, 2000; Baroni et al., 2002; Narasimhan et al., 2015) have incorporated semantic knowledge locally by checking the semantic relatedness of possibly morphologically related pair of words. Narasimhan et al. (2015) check for semantic relatedness using cosine similarity in word representations (Mikolov et al., 2013a; Pennington et al., 2014). A limitation of such an approach is the inherent “sample noise” in specific word representations (exacerbated in the case of rare words). Moreover, limitation to local comparison enforces modeling morphological relations via semantic relatedness, although it has been shown that difference vectors model morphological relations more accurately (Mikolov et al., 2013b). To address this issue, we introduce a new framework (MORSE), the first to bring semantics into morpheme segmentation both on a local and a vocabulary-wide level. That is, when checking for the morphological relation between two words, we not only check for the semantic relatedness of the pair at hand (local), but also check if the difference vectors of pairs showing similar orthographic change are consistent (vocabulary-wide). 
In summary, MORSE clusters pairs of words which only vary by an affix; for example, pairs such as (“quick”, “quickly”) and (“hopeful”, “hopefully”) get clustered together. To verify the cluster of a specific affix from a semantic corpuswide standpoint, we check for the consistency of the difference vectors (Mikolov et al., 2013b). To evaluate it from an orthographic corpus-wide perspective, we check for the size of each cluster of an affix. To evaluate each pair in a cluster locally from a semantic standpoint, we check if a pair of words in a valid affix cluster are morphologically related by checking if its difference vector is consistent with other members in the cluster and if the words in the pair are semantically related (i.e. close in the vector space). The reason for local 552 evaluations is exemplified by (“on”,“only”) which belongs to the cluster of a valid affix (“ly”), although they are not (obviously) morphologically related. We would expect such a pair to fail the last two local evaluation methods. Our proposed segmentation algorithm is evaluated using benchmarking datasets from the Morpho Challenge (MC) for multiple languages and a newly introduced dataset for English which compensates for lack of discriminating capabilities in the MC dataset. Experiments reveal that our proposed framework not only outperforms the widely used approach, but also performs better than published state-of-the-art results. The central contribution of this work is a novel framework that performs morpheme segmentation resulting in new state-of-the-art results. To the best of our knowledge this is the first unsupervised approach to consider the vocabulary-wide semantic knowledge of words and their affixes in addition to relying on their surface forms. Moreover we point out the deficiencies in the MC datasets with respect to the compositionality of morphemes and introduce our own dataset free of these deficiencies. 2 Related Work Extensive work has been done in morphology learning, with tasks such as morphological analysis (Baayen et al., 1993), morphological reinflection (Cotterell et al., 2016), and morpheme segmentation. Given the less complex nature of morpheme segmentation in comparison to the other tasks, most systems developed for morpheme segmentation have been unsupervised or minimally supervised (mostly for parameter tuning). Unsupervised morpheme segmentation traces back to (Harris, 1970), which falls under the framework of Letter Successor Variety (LSV) which builds on the hypothesis that predictability of successor letters is high within morphemes and low otherwise. The most dominant pieces of work on unsupervised morpheme segmentation, Morfessor (Creutz and Lagus, 2002, 2005, 2007) and Linguistica (Goldsmith, 2000) adopt the Minimum Description Length (MDL) principle (Rissanen, 1998): they aim to minimize describing the lexicon of morphs as well as minimizing the description of an input corpus. Morfessor has a widely used API and has inspired a large body of following work (Kohonen et al., 2010; Gr¨onroos et al., 2014). The unsupervised original implementation was later adapted (Kohonen et al., 2010; Gr¨onroos et al., 2014) to allow for minimal supervision. Another work on minimally supervised morpheme segmentation is (Sirts and Goldwater, 2013) which relies on Adaptor Grammars (AGs) (Johnson et al., 2006). AGs learn latent tree structures over an input corpus using a nonparametric Bayesian model (Sirts and Goldwater, 2013). 
(Lafferty et al., 2001) use Conditional Random Fields (CRF) for morpheme segmentation. In this supervised method, the morpheme segmentation task is modeled as a sequence-to-sequence learning problem, whereby the sequence of labels defines the boundaries of morphemes (Ruokolainen et al., 2013, 2014). In contrast to the previously mentioned generative approaches of MDL and AG, this method takes a discriminative approach and allows for the inclusion of a larger set of features. In this approach, CRF learns a conditional probability of a segmentation given a word (Ruokolainen et al., 2013, 2014). All these morpheme segmenters rely solely on orthographic features of morphemes. Semantics were initially introduced to morpheme segmenters by (Schone and Jurafsky, 2000), using LSA to generate word representations and then evaluate if two words are morphologically related based on semantic relatedness, as well as deterministic orthographic methods. Similarly, (Baroni et al., 2002) use edit distance and mutual information as metrics for semantic and orthographic validity of a morphological relation between two words. Recent work in (Narasimhan et al., 2015), inspired by the log-linear model in (Poon et al., 2009) incorporates semantic relatedness into the model via word representations. Other systems such as ( ¨Ust¨un and Can, 2016) rely solely on evaluating two words from a semantic standpoint by the use of a twolayer neural network. MORSE introduces semantic information into its morpheme segmenters via distributed word representations while also relying on orthographic features. Inspired by the work of (Soricut and Och, 2015), instead of merely evaluating semantic relatedness, we are the first to evaluate the morphological relationship via the difference vector of morphologically related words. Comparing the difference vectors of multiple pairs across the corpus following the same morphological relation, gives 553 MORSE a vocabulary-wide evaluation of morphological relations learned. 3 System The key limitation of previous frameworks that rely solely on orthographic features is the resulting over-segmentation. As an example, MDLbased frameworks segment “sing” to “s-ing” due to the high frequency of the morphemes: “s” and “ing”. Our framework combines semantic relatedness with orthographic relatedness to eliminate such error. For the example mentioned, MORSE validates morphemes such as “s” and “ing” from an orthographic perspective, yet invalidates the relation between “s” and “sing” from a local and vocabulary-wide semantic perspective. Hence, MORSE will segment “jumping” as “jump-ing”, and perform no segmentations on “sing”. To bring in semantic understanding into MORSE, we rely on word representations (Mikolov et al., 2013a; Pennington et al., 2014). These word representations capture the semantics of the vocabulary through statistics over the context in which they appear. Moreover, morphosyntactic regularities have been shown over these word representations, whereby pairs of words sharing the same relationship exhibit equivalent difference vectors (Mikolov et al., 2013b). For example, it is expected in the vector space of word representations that ⃗wjumping ´ ⃗wjump « ⃗wplaying ´ ⃗wplay, but ⃗wsing ´ ⃗ws ff⃗wplaying ´ ⃗wplay. As a high level description, we first learn all possible affix transformations (morphological rules) in the language from pairs of words from an orthographic standpoint. 
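The formal rule and support-set definitions follow below. The vector-space regularity that MORSE exploits can be checked directly with off-the-shelf embeddings; the sketch assumes a pretrained gensim Word2Vec model, with the file path as a placeholder:

```python
import numpy as np
from gensim.models import KeyedVectors

# Assumes 300-d vectors trained on Wikipedia, as described later in Section 4.1.
wv = KeyedVectors.load_word2vec_format("wiki.vectors.bin", binary=True)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Difference vectors of pairs related by the same rule should align ...
d1 = wv["jumping"] - wv["jump"]
d2 = wv["playing"] - wv["play"]
print(cos(d1, d2))   # high: both pairs instantiate the suffix rule phi -> "ing"

# ... while orthographic coincidences should not.
d3 = wv["sing"] - wv["s"]
print(cos(d3, d2))   # low: "sing" is not "s" + "ing"
```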
For example, the pair (“jump”, “jumping”) corresponds to the valid affix transformation φ suffix ÝÝÝÑ “ing” (where φ represents the empty string), and the pair (“slow”, “slogan”) corresponds to the invalid rule “w” suffix ÝÝÝÑ “gan”. Then we invalidate the rules, such as “w” suffix ÝÝÝÑ “gan”, that do not conform to the linear relation in the vector space. We also invalidate pairs of words which, due to randomness, are orthographically related via a valid rule although they are not morphologically related, such as (“on”, “only”). Now we formalize the objects we learn in MORSE and the scores (orthographic and semantic) used for validation. This constitutes the training stage. Finally, we formalize the inference stage, where we use these objects and scores to perform morpheme segmentation. 3.1 Training Stage Objects: • Rule set R made of all possible affix transformations in a language. R is populated via the following definition: Rsuffix = {aff1 suffix ÝÝÝÑ aff2: D (w1, w2) P V2, stem(w1) = stem(w2), w1 = stem(w1) + aff1, w2 = stem(w2) + aff2}, Rprefix is defined similarly for prefixes, and R = Rsuffix Y Rprefix. An example R would be equal to {φ suffix ÝÝÝÑ “ly”, φ prefix ÝÝÝÑ “un”, “ing” suffix ÝÝÝÑ “ed”,... }. • Support set SSr for a rule r P R consists of all pairs of words related via r on a surface level. SSr is populated via the following definition: SSr = {(w1, w2): w1, w2 P V, w1 rÝÑ w2}. An example support set of the rule “ing” suffix ÝÝÝÑ “ed” would be {(“playing”, “played”), (“crafting”, “crafted”),...}. Scores: • scorer orth(r) is a vocabulary-wide orthographic confidence score for rule r P R. It reflects the validity of an affix transformation in a language from an orthographic perspective. This score is evaluated as scorer orth(r) = |SSr|. • scorer sem(r) is a vocabulary-wide semantic confidence score for rule r P R. It reflects the validity of an affix transformation in a language from a semantic perspective. This score is evaluated as: scorer sem(r) = |clusterr|/|SSr|2 where clusterr = {((w1, w2), (w3, w4)): (w1, w2), (w3, w4) P SSr, ⃗w1 ´ ⃗w2 « ⃗w3 ´ ⃗w4 }. We consider ⃗w1 ´ ⃗w2 « ⃗w3 ´ ⃗w4 if cos(⃗w4, ⃗w2 ´ ⃗w1 ` ⃗w3) ą 0.1. • scorew sem((w1, w2) P SSr) is a vocabularywide semantic confidence score for a pair of words (w1, w2). The pair of words is related via r on an orthographic level, but the score reflects the validity of the morphological relation via r on a semantic level. This score is evaluated as: scorew sem((w1, w2) P SSr) = |{(w3, w4): (w3, w4) P SSr, ⃗w1 ´ ⃗w2 « ⃗w3´ ⃗w4}|/|SSr|. In other words, it is the fraction of pairs of words in the support set that exhibit a similar linear relation as (w1, w2) in the vector space. 554 • scoreloc sem((w1, w2) P SSr) is a local semantic confidence score for a pair of words (w1, w2). The pair of words is related via r on an orthographic level, but the score reflects the semantic relatedness between the pair. The score is evaluated as: scoreloc sem((w1, w2) P SSr) = cos(⃗w1, ⃗w2). 3.2 Inference Stage In this stage we perform morpheme segmentation using the knowledge gained from the first stage. We begin with some notation: let Radd = {r : r P R, r = aff1 rÝÑ aff2, aff1 = φ, aff2 ‰ φ }, Rrep = {r : r P R, r = aff1 rÝÑ aff2, aff1 ‰ φ, aff2 ‰ φ }. In other words, we divide the rules to those where an affix is added (Radd) and to those where an affix is replaced (Rrep). Given a word w to segment, we search for r˚, the solution to the following optimization problem1. 
The search space is limited to the rules that include w in their support set, a fairly small search space that makes the corresponding computation readily tractable:

$\max_r \; \sum_{t_1} score_{t_1}((w_1, w) \in SS_r) + \sum_{t_2} score_{t_2}(r)$
s.t.  $r \in R_{add}$
      $score_{r\_sem}(r) > t_{r\_sem}$
      $score_{r\_orth}(r) > t_{r\_orth}$
      $score_{w\_sem}((w_1, w) \in SS_r) > t_{w\_sem}$
      $score_{loc\_sem}((w_1, w) \in SS_r) > t_{loc\_sem}$

where $t_1 \in \{w\_sem, loc\_sem\}$, $t_2 \in \{r\_sem, r\_orth\}$, and $t_{r\_sem}$, $t_{r\_orth}$, $t_{w\_sem}$, $t_{loc\_sem}$ are hyperparameters of the system.

Now given $r^* = \phi \xrightarrow{\text{suffix}} suf$, $w_1$ is defined by $w_1 \xrightarrow{r^*} w$. Thus the algorithm segments $w \rightarrow w_1$-$suf$. We treat prefixes similarly. Next, the algorithm iterates over $w_1$. Figure 1 shows the segmentation process of the word "unhealthy" based on the sequentially retrieved $r^*$.

The reason we restrict our rule set to $R_{add}$ in the optimization problem is to avoid rules such as the suffix rule "er" → "ing", as in ("player", "playing"), which would lead to false segmentations such as "playing" → "playering". Yet we cannot completely restrict our search to $R_{add}$, due to rules such as "y" → "ies" in words like ("sky", "skies"). To be able to segment words such as "skies", we would have to consider rules in $R_{rep}$, but only after searching in $R_{add}$. Thus, if the first optimization problem is infeasible, we repeat it while replacing $R_{add}$ with $R_{rep}$. The program terminates when both optimization problems are infeasible.

¹ $r$ and $w$ uniquely identify $w_1$, and thus the search space is defined only over $r$.

[Figure 1: Illustration of the iterative process of segmentation in MORSE.]

4 Experiments

We conduct a variety of experiments to assess the performance of MORSE and compare it with prior work. First, the performance is assessed intrinsically on the task of morpheme segmentation against the most widely used morpheme segmenter, Morfessor 2.0. We evaluate the performance across three languages of varying morphological richness: English, Turkish and Finnish, with Finnish being the richest in morphology and English the poorest. Second, we show the inadequacies of benchmarking gold datasets for this task and describe a new dataset that we created to address them. Third, in order to highlight the effect of including semantic information, we compare MORSE against Morfessor on a set of words which should not be segmented from a semantic perspective although orthographically they appear segmentable (such as "freshman"). In all of our experiments (unless specified otherwise), we report precision and recall (and the corresponding F1 scores), with locations of morpheme boundaries considered positives and the remaining locations considered negatives. Note that we disregard the starting and ending positions of words, since they form trivial boundaries (Virpioja et al., 2011).

4.1 Setup

Both systems, Morfessor and MORSE, were trained on the same monolingual corpus, Wikipedia² (as of September 20, 2016), to control for confounding factors within the experiment. For each language considered, the respective Wikipedia dump was preprocessed using publicly available code³. We use Word2Vec (Mikolov et al., 2013a) to train word representations of 300 dimensions with a context window of size 5. Also, for computational efficiency, MORSE was limited to a vocabulary of size 1M, a restriction not enforced on Morfessor.

² https://dumps.wikimedia.org
³ https://github.com/bwbaugh/wikipedia-extractor

Table 1: Morpho Challenge 2010 dataset sizes.
Dataset       En     Fi     Tr
Tuning Data   1000   1000   971
Test Data     686    760    809
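Returning to the inference procedure of Section 3.2, a compact sketch of the greedy loop is given below. It covers only the suffix-only, $R_{add}$ special case; the scores and thresholds are assumed to come from the training stage, and all helper names are hypothetical:

```python
def segment(word, suffixes, thresholds, scores):
    """Greedy MORSE-style inference (suffix-only sketch of the optimization
    above): repeatedly strip the best-scoring suffix whose rule-level and
    pair-level scores clear the tuned thresholds; prefixes are handled
    analogously in the full system."""
    morphemes = []
    current = word
    while True:
        best, best_score = None, float("-inf")
        for suf in suffixes:                      # candidate rules phi -> suf
            if not current.endswith(suf) or len(current) <= len(suf):
                continue
            stem = current[:-len(suf)]
            s = scores(stem, current, suf)        # the four scores of Section 3.1
            if (s["r_sem"] > thresholds["r_sem"] and
                    s["r_orth"] > thresholds["r_orth"] and
                    s["w_sem"] > thresholds["w_sem"] and
                    s["loc_sem"] > thresholds["loc_sem"]):
                total = s["w_sem"] + s["loc_sem"] + s["r_sem"] + s["r_orth"]
                if total > best_score:
                    best, best_score = (stem, suf), total
        if best is None:
            return [current] + morphemes          # last unsegmented form = lemma
        current, suf = best
        morphemes.insert(0, suf)

# e.g. segment("painfully", ...) would ideally return ["pain", "ful", "ly"]
```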
MORSE’s hyperparameters are tuned based on a tuning set of gold morpheme segmentations. We have publicly released the source code of a pretrained MORSE4 as described in this paper. 4.2 Morpho Challenge Dataset As our first intrinsic experiment, we consider the Morpho Challenge (MC) gold segmentations available online5. For every language, two datasets are supplied: training and development. For the purpose of our experiments, all systems use the development dataset as a test dataset, and the training dataset is used for tuning MORSE’s hyperparameters. MC dataset sizes are reported in Table 1. 4.3 Semantically Driven Dataset There are a variety of weaknesses in the MC dataset, specifically related to whether the segmentation is semantically appropriate or not. We introduce a new semantically driven dataset (SD17) for morpheme segmentation along with the methodology used for creation; this new dataset is publicly available in the canonical6 and non-canonical7 versions (Cotterell and Vieira, 2016). Non-compositional segmentation: One of the key requirements of morpheme segmentation is the compositionality of the meaning of the word from the meaning of its morphemes. This requirement is violated on multiple occasions in the MC dataset. One example from Table 2 is segmenting the word “business” into “busi-ness”, which falsely assumes that “business” means the act of being busy. Such a segmentation might be consistent with the historic origin of the word, but with 4https://goo.gl/w4r7vP 5http://research.ics.aalto.fi/events/ morphochallenge2010 6https://goo.gl/MgKfG1 7https://goo.gl/0vTXVt Word Gold Segmentation freshman fresh man airline air line business’ busi ness ’ ahead a head adultery adult ery Table 2: Examples of gold morpheme segmentations from the Morpho Challenge 2010 dataset deemed invalid from a compositionality viewpoint. radical semantic changes over time, the segmentation no longer semantically represents the compositionality of the words’ components (Wijaya and Yeniterzi, 2011). Not only does such a weakness contribute to false segmentations, but it also favors segmentation methods following the MDL principle. Trivial instances: The second weakness in the MC dataset is due to abundance of trivial instances. These instances lack discriminating capability since all methods can easily predict them (Baker, 2001). These instances are comprised of genetive cases (such as teacher’s) as well as hyphenated words (such as turning-point). For genetive cases, segmenting at the apostrophe leads to perfect precision and recall, and thus such instances are deemed trivial. In the case of hyphenated words, segmenting at the hyphen is a correct segmentation with a very high probability. In the MC tuning dataset, in 43 times out of 46, the hyphen was a correct indication of segmentation. Other issues exist in the Morpho Challenge dataset although less abundantly. There are instances of wrong segmentations possibly due to human error. One example of such instance is “turning-point” segmented to “turning - point” instead of “turn ing - point”. Another issue, which is hard to avoid, is ambiguity of segmentation boundaries. Take for example the word “strafed”, the segmentations “straf-ed” and “strafe-d” are equally justified. In such situations, the MC dataset favors complete affixes rather than complete lemmas. This also favors MDL-based segmenters. 
We note that the MC dataset also provides segmentations in a canonical version such as “strafe-ed”, yet for the sake of a fair comparison with Morfessor and all previously evaluated systems on the MC dataset, we consider only the former version of segmentations. 556 English Turkish Finnish P R F1 P R F1 P R F1 Morfessor 74.46 56.66 64.35 40.81 25.00 31.01 43.09 28.16 34.06 MORSE 81.98 61.57 70.32 49.90 30.78 38.07 36.26 9.44 14.98 Table 3: Performance of MORSE on the MC dataset across three languages: English, Turkish, Finnish. Due to these reasons, we create a new dataset SD17 for English gold morpheme segmentations with compositionality guiding the annotations. We select 2000 words randomly from the 10K most frequent words in the English Wikipedia dump and have them annotated by two proficient English speakers. The segmentation criterion was to segment the word to the largest extent possible while preserving its compositionality from the segments. The inter-annotator agreement reached 91% on a word level. Based on post annotation discussions, annotators agreed on 99% of the words, and words not agreed on were eliminated along with words containing non-alpha characters to avoid trivial instances. SD17 is used to evaluate the performance of both Morfessor and MORSE. We claim that the performance on SD17 is a better indication of the performance of a morpheme segmenter. By the use of SD17 we expect to gain insights on the extent to which morpheme segmentation is a function of semantics in addition to orthography. 4.4 Handling Compositionality We have hypothesized that following the MDL principle (such as Morfessor) leads to oversegmentation. This over-segmentation happens specifically when the meaning of the word does not follow from the meaning of its morphemes. Examples include words such as “red head”, “duck face”, “how ever”, “s ing”. A subset of these words are defined by linguists as exocentric compounds (Bauer, 2008). MORSE does not suffer from this issue owing to its use of a semantic model. We use a collection of 100 English words which appear to be segmentable but actually are not (example: “however”). Such a collection will highlight a system’s capability of distinguishing frequent letter sequences from the semantic contribution of this letter sequence in a word. We make this collection publicly available8. 8https://goo.gl/EFbacj En Tr Fi Candidate Rules 27.5M 14.9M 10.8M Candidate Rel. Pairs 53.3M 25.1M 18.6M Table 4: Number of candidate rules and candidate related word pairs detected per language. 5 Results We compare MORSE with Morfessor, and place the performance alongside the state-of-the-art published results. 5.1 Morpho Challenge Dataset As demonstrated in Table 3, MORSE performs better than Mofessor on English and Turkish, and worse on Finnish. Considering English first, using MORSE instead of Morfessor, resulted in a 6% absolute increase in F1 scores. This supports our claim for the need of semantic cues in morpheme segmentation, and also validates the method used in this paper. Since English is a less systematic language in terms of the orthographic structure of words, semantic cues are of greater need, and hence a system which relies on semantic cues is expected to perform better; indeed this is the case. Similarly, MORSE performs better on Turkish with a 7% absolute margin in terms of F1 score. On the other hand, Morfessor surpasses MORSE in performance on Finnish by a large margin as well, especially in terms of recall. 
5.1.1 Discussion We hypothesize that the richness of morphology in Finnish led to suboptimal performance of MORSE. This is because richness in morphology leads to word level sparsity which directly leads to: (1) Degradation of quality of word representations (2) Increased vocabulary size exacerbating the issue of limited vocabulary (recall MORSE was limited to a vocabulary of 1M). In a language with productive morphology, limiting its vocabulary results in a lower chance of finding morphologically related word pairs. This negatively im557 pacts the training stage of MORSE which relies on the availability of such pairs. In order to detect the suffix “ly” from the word “cheerfully” MORSE needs to come across “cheerful” as well. Coming across “cheerful” is now a lower probability event due to high sparsity. This is not as much of an issue for Morfessor under the MDL principle, since it might detect “ly” just by coming across multiple words ending with “ly” even without encountering the base forms of those words. We show how the detection of rules is affected by considering the number of candidate rules detected as well as the number of candidate morphologically related word pairs detected. As shown in Table 4, the number of detected candidate rules and candidate related words decreases with the increase in morphology in a language. This confirms our hypothesis; we note that this issue can be directly attributed to the limited vocabulary size in MORSE. With the increase in processing power, and thus larger vocabulary coverage, MORSE is expected to perform better. 5.2 Semantically Driven Dataset The performance of MORSE and Morfessor on SD17 is shown in Table 5. The use of MC data (which does not adhere to the compositionality principle) to tune MORSE to be evaluated on SD17 (which does adhere to the compositionality principle) is not optimal. Thus, we evaluate MORSE on SD17 using 5-fold cross validation, where 80% of the dataset is used to tune and 20% is used to evaluate. Precision, Recall, and F1 scores are averaged and reported in Table 5 using the label MORSE-CV. Based on the results in Table 5, we make the following observations. Comparing MORSE-CV to MORSE reflects the fundamental difference between SD17 and MC datasets. Knowing the basis of construction of SD17 and the fundamental weaknesses in MC datasets, we attribute the performance increase to the lack of compositionality in MC dataset. Comparing MORSE-CV to Morfessor, we observe a significant jump in performance (an increase of 24%). In comparison, the increase on the MC dataset (6%) shows that the Morpho Challenge dataset underestimates the performance gap between Morfessor and MORSE due its inherent weaknesses. Since MORSE is equipped with the capability to retrieve full morphemes even when not present P R F1 Morfessor 65.95 51.13 57.60 MORSE 75.35 83.60 79.26 MORSE-CV 84.6 78.36 81.29 Table 5: Performance of MORSE against Morfessor on the non-canonical version of SD17 P R F1 Morfessor 65.61 50.87 57.31 MORSE 79.70 82.37 81.01 MORSE-CV 85.08 82.90 83.96 Table 6: Performance of MORSE against Morfessor on the canonical version of SD17 in full orthographically, a capability that Morfessor lacks, we evaluated both systems on the canonical version of SD17. The results are reported in Table 6. We notice that evaluating on the canonical form of SD17 gives a further edge for MORSE over Morfessor. 
For evaluation on the canonical version of SD17, we switch to morpheme-level evaluation instead of boundary-level as a more suitable method for Morfessor. Morpheme-level evaluation is distinguished from boundary-level evaluation in that we evaluate the detection of morphemes instead of the boundary locations in the segmented word. We next compare MORSE against published state-of-the-art results9. As one can see in Table 7 MORSE significantly performs better than published state-of-the-art results, most notably (Narasimhan et al., 2015) referred to as LLSM in the Table. Comparison is also made against the top results in the latest Morpho Challenge: Morfessor S+W and Morfessor S+W+L (Kohonen et al., 2010), and Base Inference (Lignos, 2010). P R F1 MORSE 84.6 78.36 81.29 LLSM 80.70 72.20 76.2 Morfessor S+W 65.62 69.28 67.40 Morfessor S+W+L 67.87 66.43 67.14 Base Inference 80.77 53.76 64.55 Table 7: Performance of MORSE against published state-of-the-art results 558 Figure 2: Precision (left) and Recall (right) of MORSE as a function of the hyperparameters: tr sem, tw sem 5.3 Handling Compositionality We compare the performance of MORSE and Morfessor on a set of words made up of morphemes which don’t compose the meaning of the word. Since all the boundaries in this dataset are negative, to evaluate both systems (with MORSE tuned on SD17), we only report the number of segments generated. The more segments a system generates, the worse is its performance. We find that MORSE generates 7 false morphemes whereas Morfessor generates 43 false morphemes. This shows MORSE’s robustness to such examples through its semantic knowledge and validates our claim that Morfessor oversegments on such examples. 6 Discussion One of the benefits of MORSE against other frameworks such as MDL is its ability to identify the lemma within the segmentation. The lemma would be the last non-segmented word in the iterative process of segmentation. Hence, an advantage of our framework is its easy adaptability into a lemmatizer and even a stemmer. Another key aspect which is not present in some of the competitive systems is the need for a small tuning dataset. This is a point in favor of completely unsupervised systems such as Morfessor. On the other hand, these hyperparameters could allow for flexibility. Figure 2 shows how precision and recall changes as a function of the hyperparameter selection10. As one would expect, increasing the hyperparameters, in general, leads 9The five published state-of-the-art results are on different datasets 10Only a subset of the hyperparameters is used for display purposes to a stricter search space and thus increases precision and decreases recall. Putting these results in perspective, the user of MORSE is given the capability of controlling for precision and recall based on the needs of the downstream task. Moreover, to check for the level of dependency of MORSE on a set of gold morpheme segmentations for tuning, we check for the variation in performance with respect to size of tuning data. For the purpose of this experiment we take an 8020 split of SD17 and vary the size of the tuning set. We notice that the performance (81.90% F1) reaches a steady state at 20% (« 300 gold segmentations) of the tuning data. This reflects the minimal dependency on a tuning dataset. Regarding the training stage, homomorphs are treated as one rule and allomorphs are treated as separate rules. 
For example, (“tall”, “taller”) and (“fast”, “faster”) are wrongly considered to have the same morphological relation, besides (“cat”, “cats”) and (“butterfly”, “butterflies”) are wrongly considered to have different morphological relations. The separate clustering of the different forms of a homomorph leads to the underestimation of the respective orthographic scores. Moreover, the clustering of allomorphs together would lead to the underestimation of the semantic score of the rule as well as the underestimation of the vocabulary-wide semantic score of word pairs in the support set of this rule. This does not significantly affect the performance of MORSE, since the tuned thresholds are able to distinguish between the low scores of an invalid rule and the mediocre underestimated scores of allomorphs and homomorphs. As for the inference stage of MORSE, the greedy inference approach limits its performance. In other words, a wrong segmentation at the be559 ginning will propagate and result in consequent wrong segmentations. Also, MORSE’s limitation to concatenative morphology decreases its efficacy on languages that include non-concatenative morphology. This opens the stage for further research on a more optimal inference stage and a more global modeling of orthographic morphological transformations. 7 Conclusions and Future Work In this paper, we have presented MORSE, a first morpheme segmenter to consider semantic structure at this scale (local and vocabulary-wide). We show its superiority over state-of-the-art algorithms using intrinsic evaluation on a variety of languages. We also pinpointed the weaknesses in current benchmarking datasets, and presented a new dataset free of these weaknesses. With a relative increase in performance reaching 24% absolute increase over Morfessor, this work proves the significance of semantic cues as well as validates a new state-of-the-art morpheme segmenter. For future work, we plan to address the limitations of MORSE: minimal supervision, greedy inference, and concatenative orthographic model. Moreover, we plan to computationally optimize the training stage for the sake of wider adoption by the community. Acknowledgements This work is supported in part by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM Cognitive Horizons Network. References RH Baayen, R Piepenbrock, and H Van Rijn. 1993. The CELEX lexical database [cd-rom] Philadelphia: University of Pennsylvania. Linguistic Data Consortium . Frank B Baker. 2001. The basics of item response theory. ERIC. Marco Baroni, Johannes Matiasek, and Harald Trost. 2002. Unsupervised discovery of morphologically related words based on orthographic and semantic similarity. In Proceedings of the ACL-02 Workshop on Morphological and Phonological LearningVolume 6. Association for Computational Linguistics, pages 48–57. Laurie Bauer. 2008. Exocentric compounds. Morphology 18(1):51–74. Jeff A Bilmes and Katrin Kirchhoff. 2003. Factored language models and generalized parallel backoff. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: companion volume of the Proceedings of HLTNAACL 2003–short papers-Volume 2. Association for Computational Linguistics, pages 4–6. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. 
In Proceedings of the 2016 Meeting of SIGMORPHON. Association for Computational Linguistics, Berlin, Germany. Ryan Cotterell and Tim Vieira. 2016. A joint model of orthography and morphological segmentation. In Proceedings of NAACL-HLT. pages 664–669. Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning-Volume 6. Association for Computational Linguistics, pages 21–30. Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing (TSLP) 4(1):3. John Goldsmith. 2000. Linguistica: An automatic morphological analyzer. In Proceedings of 36th meeting of the Chicago Linguistic Society. Stig-Arne Gr¨onroos, Sami Virpioja, Peter Smit, and Mikko Kurimo. 2014. Morfessor Flatcat: An HMM-based method for unsupervised and semisupervised learning of morphology. In COLING. pages 1177–1185. Zellig S Harris. 1970. From phoneme to morpheme. In Papers in Structural and Transformational Linguistics, Springer, pages 32–67. Mark Johnson, Thomas L Griffiths, and Sharon Goldwater. 2006. Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In Advances in neural information processing systems. pages 641–648. Oskar Kohonen, Sami Virpioja, and Krista Lagus. 2010. Semi-supervised learning of concatenative morphology. In Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology. Association for Computational Linguistics, pages 78–86. Mikko Kurimo, Mathias Creutz, and Ville T Turunen. 2007. Unsupervised morpheme analysis evaluation by IR experiments-Morpho Challenge 2007. In CLEF (Working Notes). 560 Mikko Kurimo, Mathias Creutz, Matti Varjokallio, Ebru Arisoy, and Murat Sarac¸lar. 2006. Unsupervised segmentation of words into morphemes– challenge 2005: An introduction and evaluation report. In Proceedings of the PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the eighteenth International Conference on Machine Learning, ICML. volume 1, pages 282–289. Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. In Proceedings of HLTNAACL 2004: Short Papers. Association for Computational Linguistics, pages 57–60. Constantine Lignos. 2010. Learning from unseen data. In Proceedings of the Morpho Challenge 2010 Workshop. pages 35–38. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In HLT-NAACL. volume 13, pages 746–751. Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncovering morphological chains. Transactions of the Association for Computational Linguistics 3. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532– 1543. Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. 
Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 209–217. Jorma Rissanen. 1998. Stochastic complexity in statistical inquiry, volume 15. World scientific. Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2013. Supervised morphological segmentation in a low-resource learning setting using conditional random fields. In CoNLL. pages 29–37. Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2014. Painless semi-supervised morphological segmentation using conditional random fields. In EACL. pages 84–89. Patrick Schone and Daniel Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. In Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational Natural Language Learning-Volume 7. Association for Computational Linguistics, pages 67–72. Kairit Sirts and Sharon Goldwater. 2013. Minimallysupervised morphological segmentation using adaptor grammars. Transactions of the Association for Computational Linguistics 1:255–266. Radu Soricut and Franz Josef Och. 2015. Unsupervised morphology induction using word embeddings. In HLT-NAACL. pages 1627–1637. Ahmet ¨Ust¨un and Burcu Can. 2016. Unsupervised morphological segmentation using neural word embeddings. In International Conference on Statistical Language and Speech Processing. Springer, pages 43–53. Sami Virpioja, Ville T Turunen, Sebastian Spiegler, Oskar Kohonen, and Mikko Kurimo. 2011. Empirical comparison of evaluation methods for unsupervised learning of morphology. TAL 52(2):45–90. Sami Virpioja, Jaakko J V¨ayrynen, Mathias Creutz, and Markus Sadeniemi. 2007. Morphology-aware statistical machine translation based on morphs induced in an unsupervised manner. Machine Translation Summit XI 2007:491–498. Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proceedings of the 2011 International Workshop on DETecting and Exploiting Cultural diversiTy on the Social Web. ACM, New York, NY, USA, DETECT ’11, pages 35–40. https://doi.org/10.1145/2064448.2064475. Yuri L Zieman and Howard L Bleich. 1997. Conceptual mapping of user’s queries to medical subject headings. In Proceedings of the AMIA Annual Fall Symposium. American Medical Informatics Association, page 519. 561
2017
51
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 562–570 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1052 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 562–570 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1052 Deep Pyramid Convolutional Neural Networks for Text Categorization Rie Johnson RJ Research Consulting Tarrytown, NY, USA [email protected] Tong Zhang Tencent AI Lab Shenzhen, China [email protected] Abstract This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent longrange associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization. 1 Introduction Text categorization is an important task whose applications include spam detection, sentiment classification, and topic classification. In recent years, neural networks that can make use of word order have been shown to be effective for text categorization. While simple and shallow convolutional neural networks (CNNs) (Kim, 2014; Johnson and Zhang, 2015a) were proposed for this task earlier, more recently, deep and more complex neural networks have also been studied, assuming availability of relatively large amounts of training data (e.g., one million documents). Examples are deep character-level CNNs (Zhang et al., 2015; Conneau et al., 2016), a complex combination of CNNs and recurrent neural networks (RNNs) (Tang et al., 2015), and RNNs in a wordsentence hierarchy (Yang et al., 2016). A CNN is a feedforward network with convolution layers interleaved with pooling layers. Essentially, a convolution layer converts to a vector every small patch of data (either the original data such as text or image or the output of the previous layer) at every location (e.g., 3-word windows around every word), which can be processed in parallel. By contrast, an RNN has connections that form a cycle. In its typical application to text, a recurrent unit takes words one by one as well as its own output on the previous word, which is parallel-processing unfriendly. While both CNNs and RNNs can take advantage of word order, the simple nature and parallel-processing friendliness of CNNs make them attractive particularly when large training data causes computational challenges. 
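As a concrete illustration of this contrast (not taken from the paper), the following numpy sketch maps every 3-word window of a text to a feature vector with one batched matrix multiplication, while the recurrent update must walk through the text one word at a time. All dimensions and weights here are made up for readability.

import numpy as np

rng = np.random.default_rng(0)
n_words, emb_dim, n_feature_maps = 7, 4, 5      # a 7-word text, 4-dim embeddings
E = rng.normal(size=(n_words, emb_dim))         # word embeddings of the text
W = rng.normal(size=(n_feature_maps, 3 * emb_dim))  # one weight matrix per 3-word patch
b = np.zeros(n_feature_maps)

# Pad so that every word has a 3-word window around it ("same" convolution).
padded = np.vstack([np.zeros((1, emb_dim)), E, np.zeros((1, emb_dim))])

# Gather the 3-word window at every position: shape (n_words, 3 * emb_dim).
windows = np.stack([padded[i:i + 3].reshape(-1) for i in range(n_words)])

# A convolution layer: one matrix multiply gives the output vector at every
# location at once, so all positions can be processed in parallel.
conv_out = np.maximum(windows @ W.T + b, 0.0)   # ReLU activation
print(conv_out.shape)                           # (7, 5): one 5-dim vector per word

# An RNN, by contrast, is inherently sequential: each step waits on the previous one.
h = np.zeros(n_feature_maps)
U = rng.normal(size=(n_feature_maps, emb_dim + n_feature_maps))
for x_t in E:
    h = np.tanh(U @ np.concatenate([x_t, h]))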
There have been several recent studies of CNN for text categorization in the large training data setting. For example, in (Conneau et al., 2016), very deep 32-layer character-level CNNs were shown to outperform deep 9-layer character-level CNNs of (Zhang et al., 2015). However, in (Johnson and Zhang, 2016), very shallow 1-layer word-level CNNs were shown to be more accurate and much faster than the very deep characterlevel CNNs of (Conneau et al., 2016). Although character-level approaches have merit in not having to deal with millions of distinct words, shallow word-level CNNs turned out to be superior even 562 when used with only a manageable number (30K) of the most frequent words. This demonstrates the basic fact – knowledge of word leads to a powerful representation. These results motivate us to pursue an effective and efficient design of deep wordlevel CNNs for text categorization. Note, however, that it is not as simple as merely replacing characters with words in character-level CNNs; doing so rather degraded accuracy in (Zhang et al., 2015). We carefully studied deepening of word-level CNNs in the large-data setting and found a deep but low-complexity network architecture with which the best accuracy can be obtained by increasing the depth but not the order of computation time – the total computation time is bounded by a constant. We call it deep pyramid CNN (DPCNN), as the computation time per layer decreases exponentially in a ‘pyramid shape’. After converting discrete text to continuous representation, the DPCNN architecture simply alternates a convolution block and a downsampling layer over and over1, leading to a deep network in which internal data size (as well as per-layer computation) shrinks in a pyramid shape. The network depth can be treated as a meta-parameter. The computational complexity of this network is bounded to be no more than twice that of one convolution block. At the same time, as described later, the ‘pyramid’ enables efficient discovery of long-range associations in the text (and so more global information), as the network is deepened. This is why DPCNN can achieve better accuracy than the shallow CNN mentioned above (hereafter ShallowCNN), which can use only short-range associations. Moreover, DPCNN can be regarded as a deep extension of ShallowCNN, which we proposed in (Johnson and Zhang, 2015b) and later tested with large datasets in (Johnson and Zhang, 2016). We show that DPCNN with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic classification. 2 Word-level deep pyramid CNN (DPCNN) for text categorization Overview of DPCNN: DPCNN is illustrated in Figure 1a. The first layer performs text region embedding, which generalizes commonly used word 1Previous deep CNNs (either on image or text) tend to be more complex and irregular, having occasional increase of the number of feature maps. embedding to the embedding of text regions covering one or more words. It is followed by stacking of convolution blocks (two convolution layers and a shortcut) interleaved with pooling layers with stride 2 for downsampling. The final pooling layer aggregates internal data for each document into one vector. We use max pooling for all pooling layers. The key features of DPCNN are as follows. • Downsampling without increasing the number of feature maps (dimensionality of layer output, 250 in Figure 1a). 
Downsampling enables efficient representation of long-range associations (and so more global information) in the text. By keeping the same number of feature maps, every 2-stride downsampling reduces the per-block computation by half and thus the total computation time is bounded by a constant. • Shortcut connections with pre-activation and identity mapping (He et al., 2016) for enabling training of deep networks. • Text region embedding enhanced with unsupervised embeddings (embeddings trained in an unsupervised manner) (Johnson and Zhang, 2015b) for improving accuracy. 2.1 Network architecture Downsampling with the number of feature maps fixed After each convolution block, we perform max-pooling with size 3 and stride 2. That is, the pooling layer produces a new internal representation of a document by taking the component-wise maximum over 3 contiguous internal vectors, representing 3 overlapping text regions, but it does this only for every other possible triplet (stride 2) instead of all the possible triplets (stride 1). This 2-stride downsampling reduces the size of the internal representation of each document by half. A number of models (Simonyan and Zisserman, 2015; He et al., 2015, 2016; Conneau et al., 2016) increase the number of feature maps whenever downsampling is performed, causing the total computational complexity to be a function of the depth. In contrast, we fix the number of feature maps, as we found that increasing the number of feature maps only does harm – increasing computation time substantially without accuracy improvement, as shown later in the experiments. 563 3 conv, 250 Region embedding 3 conv, 250 3 conv, 250 Pooling, /2 3 conv, 250 + Pooling + Downsampling Repeat Unsupervised embeddings conv:W pre-activation optional “A good buy !” Downsampling Repeat W σ(x)+b activation optional (a) Our proposed model DPCNN Pooling Region embedding Unsupervised embeddings “A good buy !” (b) cf. ShallowCNN [JZ15b] 3x3 conv, 64 3x3 conv, 64 3x3 conv, 128, /2 3x3 conv, 128 + 3x3 conv,128 3x3 conv, 128 + 3x3 conv, 256, /2 3x3 conv, 256 + + Repeat snipped Repeat 7x7 conv, 64, /2 pooling, /2 image Repeat (c) cf. ResNet for image [HZRS15] Figure 1: (a) Our proposed model DPCNN. (b,c) Previous models for comparison. ⊕indicates addition. The dotted red shortcuts in (c) perform dimension matching. DPCNN is dimension-matching free. With the number of feature maps fixed, the computation time for each convolution layer is halved (as the data size is halved) whenever 2-stride downsampling is performed, thus, forming a ‘pyramid’. Computation per layer is halved after every pooling. Computation per layer is halved after every pooling. Therefore, with DPCNNs, the total computation time is bounded by a constant – twice the computation time of a single block, which makes our deep pyramid networks computationally attractive. In addition, downsampling with stride 2 essentially doubles the effective coverage (i.e., coverage in the original document) of the convolution kernel; therefore, after going through downsampling L times, associations among words within a distance in the order of 2L can be represented. Thus, deep pyramid CNN is computationally efficient for representing long-range associations and so more global information. Shortcut connections with pre-activation To enable training of deep networks, we use additive shortcut connections with identity mapping, which can be written as z + f(z) where f represents the skipped layers (He et al., 2016). 
In DPCNN, the skipped layers f(z) are two convolution layers with pre-activation. Here, pre-activation refers to activation being done before weighting instead of after as is typically done. That is, in the convolution layer of DPCNN, Wσ(x) + b is computed at every location of each document where a column vector x represents a small region (overlapping with each other) of input at each location, σ(·) is a component-wise nonlinear activation, and weights W and biases b (unique to each layer) are the parameters to be trained. The number of W’s rows is the number of feature maps (also called the number of filters (He et al., 2015)) of this layer. We set activation σ(·) to the rectifier σ(x) = max(x, 0). In our implementation, we fixed the number of feature maps to 250 and the kernel size (the size of the small region covered by x) to 3, as shown in Figure 1a. With pre-activation, it is the results of linear weighting (Wσ(x) + b) that travel through the shortcut, and what is added to them at a ⊕(Figure 1a) is also the results of linear weighting, instead of the results of nonlinear activation (σ(Wx + b)). Intuitively, such ‘linearity’ eases training of deep networks, similar to the role of constant error carousels in LSTM (Hochreiter and Schmidhuder, 1997). We empirically observed that preactivation indeed outperformed ‘post-activation’, which is in line with the image results (He et al., 2016). No need for dimension matching Although the shortcut with pre-activation was adopted from the improved ResNet of (He et al., 2016), our model is simpler than ResNet (Figure 1c), as all the 564 shortcuts are exactly simple identity mapping (i.e., passing data exactly as it is) without any complication for dimension matching. When a shortcut meets the ‘main street’, the data from two paths need to have the same dimensionality so that they can be added; therefore, if a shortcut skips a layer that changes the dimensionality, e.g., by downsampling or by use of a different number of feature maps, then a shortcut must perform dimension matching. Dimension matching for increased number of feature maps, in particular, is typically done by projection, introducing more weight parameters to be trained. We eliminate the complication of dimension matching by not letting any shortcut skip a downsampling layer, and by fixing the number of feature maps throughout the network. The latter also substantially saves computation time as mentioned above, and we will show later in our experiments that on our tasks, we do not sacrifice anything for such a substantial efficiency gain. 2.2 Text region embedding A CNN for text categorization typically starts with converting each word in the text to a word vector (word embedding). We take a more general viewpoint as in (Johnson and Zhang, 2015b) and consider text region embedding – embedding of a region of text covering one or more words. Basic region embedding We start with the basic setting where there is no unsupervised embedding. In the region embedding layer we compute Wx + b for each word of a document where input x represents a k-word region (i.e., window) around the word in some straightforward manner, and weights W and bias b are trained with the parameters of other layers. Activation is delayed to the pre-activation of the next layer. 
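Putting together the pieces described so far (convolution blocks with pre-activation and identity shortcuts, 250 feature maps and kernel size 3 throughout, stride-2 max-pooling, and a basic region embedding layer), a minimal PyTorch sketch of the overall network could look as follows. The concrete input handling below (an embedding lookup plus a width-3 convolution standing in for the k-word region embedding), the padding choices, and all names are our assumptions for illustration, not the authors' released implementation. The choice of discrete input to the region embedding layer is discussed next.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    # Two pre-activation convolutions plus an identity shortcut, z + f(z).
    # The number of feature maps stays fixed, so no dimension matching is needed.
    def __init__(self, channels=250, kernel=3):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, z):
        f = self.conv1(F.relu(z))   # pre-activation: weighting applied to sigma(x)
        f = self.conv2(F.relu(f))
        return z + f                # linear outputs travel through the shortcut

class DPCNN(nn.Module):
    # Region embedding, a first conv block, then repeated
    # [max-pooling (size 3, stride 2) -> conv block], a final max-pooling over
    # the remaining positions, and a linear classifier.
    def __init__(self, vocab_size, n_classes, channels=250, n_blocks=7):
        super().__init__()
        # The paper's region embedding (Wx + b over a k-word region, optionally
        # plus unsupervised embeddings) is approximated here by an embedding
        # lookup followed by a width-3 convolution.
        self.embed = nn.Embedding(vocab_size, channels, padding_idx=0)
        self.region = nn.Conv1d(channels, channels, 3, padding=1)
        self.first_block = ConvBlock(channels)
        self.blocks = nn.ModuleList([ConvBlock(channels) for _ in range(n_blocks - 1)])
        self.fc = nn.Linear(channels, n_classes)

    def forward(self, token_ids):                          # (batch, doc_len)
        x = self.embed(token_ids).transpose(1, 2)          # (batch, channels, doc_len)
        x = self.first_block(self.region(x))
        for block in self.blocks:
            x = F.max_pool1d(x, kernel_size=3, stride=2)   # halves the internal size
            x = block(x)                                   # per-block cost halves as well
        x = x.max(dim=2).values                            # one vector per document
        return self.fc(x)

# 15 weight layers = 1 region embedding layer + 7 blocks of 2 convolution layers.
model = DPCNN(vocab_size=30000, n_classes=5)
logits = model(torch.randint(1, 30000, (2, 300)))          # two 300-word documents
print(logits.shape)                                        # torch.Size([2, 5])

Because the internal length is halved at every pooling step while the number of feature maps is fixed, the per-block cost in this sketch forms a geometric series, so the total cost stays bounded by roughly twice that of the first block, as argued above.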
Now let v be the size of vocabulary, and let us consider the following three types of straightforward representation of a k-word region for x: (1) sequential input: the kv-dimensional concatenation of k one-hot vectors; (2) bow input: a v-dimensional bag-of-word (bow) vector; and (3) bag-of-n-gram input: e.g., a bag of word uni, bi, and trigrams contained in the region. Setting the region size k = 1, they all become word embedding. A region embedding layer with the sequential input is equivalent to a convolution layer applied to a sequence of one-hot vectors representing a document, and this viewpoint was taken to describe the first layer of ShallowCNN in (Johnson and Zhang, 2015a,b). From the region embedding viewpoint, ShallowCNN is DPCNN’s special case in which a region embedding layer is directly followed by the final pooling layer (Figure 1b). A region embedding layer with region size k > 1 seeks to capture more complex concepts than single words in one weight layer, whereas a network with word embedding uses multiple weight layers to do this, e.g., word embedding followed by a convolution layer. In general, having fewer layers has a practical advantage of easier optimization. Beyond that, the optimum input type and the optimum region size can only be determined empirically. Our preliminary experiments indicated that when used with DPCNN (but not ShallowCNN), the sequential input has no advantage over the bow input – comparable accuracy with k times more weight parameters; therefore, we excluded the sequential input from our experiments2. The n-gram input turned out to be prone to overfitting in the supervised setting, likely due to its high representation power, but it is very useful as the input to unsupervised embeddings, which we discuss next. Enhancing region embedding with unsupervised embeddings In (Johnson and Zhang, 2015b, 2016), it was shown that accuracy was substantially improved by extending ShallowCNN with unsupervised embeddings obtained by tvembedding training (‘tv’ stands for two views). We found that accuracy of DPCNN can also be improved in this manner. Below we briefly review tv-embedding training and then describe how we use the resulting unsupervised embeddings with DPCNN. The tv-embedding training requires two views. For text categorization, we define a region of text as view-1 and its adjacent regions as view-2. Then using unlabeled data, we train a neural network of one hidden layer with an artificial task of predicting view-2 from view-1. The obtained hidden layer, which is an embedding function that takes view-1 as input, serves as an unsupervised embedding function in the model for text categorization. In (Johnson and Zhang, 2015b), we showed theoretical conditions on views and labels under which 2This differs from ShallowCNN where the sequential input is often superior to bow input. We conjecture that when bow input is used in DPCNN, convolution layers following region embedding make up for the loss of local word order caused by bow input, as they use word order. 565 AG Sogou Dbpedia Yelp.p Yelp.f Yahoo Ama.f Ama.p # of training documents 120K 450K 560K 560K 650K 1.4M 3M 3.6M # of test documents 7.6K 60K 70K 38K 50K 60K 650K 400K # of classes 4 5 14 2 5 10 5 2 Average #words 45 578 55 153 155 112 93 91 Table 1: Data. Note that Yelp.f is a balanced subset of Yelp 2015. The results on these two datasets are not comparable. unsupervised embeddings obtained this way are useful for classification. 
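The tv-embedding training just summarized can be sketched as a one-hidden-layer network that predicts a bag-of-words vector of the adjacent regions (view-2) from a bag-of-words vector of the region itself (view-1), after which the hidden layer is frozen and reused as the unsupervised embedding function. The sketch below is an illustrative PyTorch toy: the hidden-layer activation, optimizer, and dimensions are our choices, and a plain weighted square loss stands in for the exact target weighting spelled out in the experimental details later in the text.

import torch
import torch.nn as nn

vocab_size, embed_dim = 30000, 300

encoder = nn.Sequential(nn.Linear(vocab_size, embed_dim), nn.ReLU())  # hidden layer = embedding
predictor = nn.Linear(embed_dim, vocab_size)                          # predicts adjacent-region bow
opt = torch.optim.SGD(list(encoder.parameters()) + list(predictor.parameters()), lr=0.1)

def train_step(view1_bow, view2_bow, weights):
    # view1_bow, view2_bow, weights: (batch, vocab_size) float tensors.
    pred = predictor(encoder(view1_bow))
    loss = (weights * (view2_bow - pred) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Fake unlabeled batch just to show the shapes; real input would be sparse bow
# (or bag-of-n-gram) vectors of 5- or 9-word regions and their adjacent regions.
v1 = torch.zeros(8, vocab_size).scatter_(1, torch.randint(0, vocab_size, (8, 5)), 1.0)
v2 = torch.zeros(8, vocab_size).scatter_(1, torch.randint(0, vocab_size, (8, 5)), 1.0)
train_step(v1, v2, torch.ones(8, vocab_size))

# After training, `encoder` is frozen and its output serves as an unsupervised
# embedding fed, alongside the discrete region input, into the region embedding layer.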
For use with DPCNN, we train several unsupervised embeddings in this manner, which differ from one another in the region size and the vector representations of view-1 (input region) so that we can benefit from diversity. The region embedding layer of DPCNN computes Wx + P u∈U W(u)z(u) + b , where x is the discrete input as in the basic region embedding, and z(u) is the output of an unsupervised embedding function indexed by u. We will show below that use of unsupervised embeddings in this way consistently improves the accuracy of DPCNN. 3 Experiments We report the experiments with DPCNNs in comparison with previous models and alternatives. The code is publicly available on the internet. 3.1 Experimental setup Data and data preprocessing To facilitate comparisons with previous results, we used the eight datasets compiled by Zhang et al. (2015), summarized in Table 1. AG and Sogou are news. Dbpedia is an ontology. Yahoo consists of questions and answers from the ‘Yahoo! Answers’ website. Yelp and Amazon (‘Ama’) are reviews where ‘.p’ (polarity) in the names indicates that labels are binary (positive/negative), and ‘.f’ (full) indicates that labels are the number of stars. Sogou is in Romanized Chinese, and the others are in English. Classes are balanced on all the datasets. Data preprocessing was done as in (Johnson and Zhang, 2016). That is, upper-case letters were converted to lower-case letters. Unlike (Kim, 2014; Zhang et al., 2015; Conneau et al., 2016), variable-sized documents were handled as variable-sized without any shortening or padding; however, the vocabulary size was limited to 30K words. For example, as also mentioned in (Johnson and Zhang, 2016), the complete vocabulary of the Ama.p training set contains 1.3M words. A vocabulary of 30K words is only a small portion of it, but it covers about 98% of the text and produced good accuracy as reported below. Training protocol We held out 10K documents from the training data for use as a validation set on each dataset, and meta-parameter tuning was done based on the performance on the validation set. To minimize a log loss with softmax, minibatch SGD with momentum 0.9 was conducted for n epochs (n was fixed to 50 for AG, 30 for Yelp.f/p and Dbpedia, and 15 for the rest) while the learning rate was set to η for the first 4 5n epochs and then 0.1η for the rest3. The initial learning rate η was considered to be a meta-parameter. The minibatch size was fixed to 100. Regularization was done by weight decay with the parameter 0.0001 and by optional dropout (Hinton et al., 2012) with 0.5 applied to the input to the top layer. In some cases overfitting was observed, and so we performed early stopping, based on the validation performance, after reducing the learning rate to 0.1η. Weights were initialized by the Gaussian distribution with zero mean and standard deviation 0.01. The discrete input to the region embedding layer was fixed to the bow input, and the region size was chosen from {1,3,5}, while fixing output dimensionality to 250 (same as convolution layers). Details of unsupervised embedding training To facilitate comparison with ShallowCNN, we matched our unsupervised embedding setting exactly with that of (Johnson and Zhang, 2016). That is, we trained the same four types of tvembeddings, which are embeddings of 5- and 9word regions, each of which represents the input regions by either 30K-dim bow or 200K-dim 3This learning rate scheduling method was used also in (Johnson and Zhang, 2015a,b, 2016). 
It was meant to reduce learning rate when error plateaus, as is often done on image tasks, e.g., (He et al., 2015), though for simplicity, the timing of reduction was fixed for each dataset. 566 Models Deep Unsup. Yelp.p Yelp.f Yahoo Ama.f Ama.p embed. 1 DPCNN + unsupervised embed. ✓ tv 2.64 30.58 23.90 34.81 3.32 2 ShallowCNN + unsup. embed. [JZ16] tv 2.90 32.39 24.85 36.24 3.79 3 Hierarchical attention net [YYDHSH16] ✓ w2v – – 24.2 36.4 – 4 [CSBL16]’s char-level CNN: best ✓ 4.28 35.28 26.57 37.00 4.28 5 fastText bigrams (Joulin et al., 2016) 4.3 36.1 27.7 39.8 5.4 6 [ZZL15]’s char-level CNN: best ✓ 4.88 37.95 28.80 40.43 4.93 7 [ZZL15]’s word-level CNN: best ✓ (w2v) 4.60 39.58 28.84 42.39 5.51 8 [ZZL15]’s linear model: best 4.36 40.14 28.96 44.74 7.98 Table 2: Error rates (%) on larger datasets in comparison with previous models. The previous results are roughly sorted in the order of error rates (best to worst). The best results and the second best are shown in bold and italic, respectively. ‘tv’ stands for tv-embeddings. ‘w2v’ stands for word2vec. ‘(w2v)’ in row 7 indicates that the best results among those with and without word2vec pretraining are shown. Note that ‘best’ in rows 4&6–8 indicates that we are giving an ‘unfair’ advantage to these models by choosing the best test error rate among a number of variations presented in the respective papers. [JZ16]: Johnson and Zhang (2016), [YYDHSH16]: Yang et al. (2016), [CSBL16]: Conneau et al. (2016), [ZZL15]: Zhang et al. (2015) bags of {1,2,3}-grams, retaining only the most frequent 30K words or 200K {1,2,3}-grams. Training was done on the labeled data (disregarding the labels), setting the training objectives to the prediction of adjacent regions of the same size as the input region (i.e., 5 or 9). Weighted square loss P i,j αi,j(zi[j] −pi[j])2 was minimized where i goes through instances, z represents the target regions by bow, p is the model output, and the weights αi,j were set to achieve the negative sampling effect. The dimensionality of unsupervised embeddings was set to 300 unless otherwise specified. Unsupervised embeddings were fixed during the supervised training – no fine-tuning. 3.2 Results In the results below, the depth of DPCNN was fixed to 15 unless otherwise specified. Making it deeper did not substantially improve or degrade accuracy. Note that we count as depth the number of hidden weight layers including the region embedding layer but excluding unsupervised embeddings, therefore, 15 means 7 convolution blocks of 2 layers plus 1 layer for region embedding. 3.2.1 Main results Large data results We first report the error rates of our full model (DPCNN with 15 weight layers plus unsupervised embeddings) on the larger five datasets (Table 2). To put it into perspective, we also show the previous results in the literature. The previous results are roughly sorted in the order of error rates from best to worst. On all the five datasets, DPCNN outperforms all of the previous results, which validates the effectiveness of our approach. DPCNN can be regarded as a deep extension of ShallowCNN (row 2), sharing region embedding enhancement with diverse unsupervised embeddings. Note that ShallowCNN enhanced with unsupervised embeddings (row 2) was originally proposed in (Johnson and Zhang, 2015b) as a semi-supervised extension of (Johnson and Zhang, 2015a), and then it was tested on the large datasets in (Johnson and Zhang, 2016). 
The performance improvements of DPCNN over ShallowCNN indicate that the added depth is indeed useful, capturing more global information. Yang et al. (2016)'s hierarchical attention network (row 3) consists of RNNs at the word level and the sentence level. It is more complex than DPCNN due to the use of RNNs and linguistic knowledge for sentence segmentation. Similarly, Tang et al. (2015) proposed to use CNN or LSTM to represent each sentence in documents and then use RNNs. Although we do not have a direct comparison with Tang et al.'s model, Yang et al. (2016) reports that their model outperformed Tang et al.'s model. Conneau et al. (2016) and Zhang et al. (2015) proposed deep character-level CNNs (rows 4 and 6). Their models underperform our DPCNN with relatively large differences in spite of their deepness. Our models are word-level and therefore use the knowledge of word boundaries, which character-level models have no access to. While this is arguably not an apples-to-apples comparison, since word boundaries can be obtained for free in many languages, we view our model as much more useful in practice. Row 7 shows the performance of the deep word-level CNN from (Zhang et al., 2015), which was designed to match their character-level models in complexity. Its relatively poor performance shows that it is not easy to design a high-performance deep word-level CNN.
Figure 2: Error rates and computation time of DPCNN, ShallowCNN (each with and without 100-/300-dim unsupervised embeddings), and Conneau et al. (2016)'s character-level CNN on Yelp.f. The x-axis is the time in seconds spent for categorizing 10K documents using our implementation on Tesla M2070. The right figure is a close-up of x ∈ [0, 20] of the left figure. Though shown on one particular dataset Yelp.f, the trend is the same on the other four large datasets.
Computation time In Figure 2, we plot error rates in relation to the computation time – the time spent for categorizing 10K documents using our implementation on a GPU. The right figure is a close-up of x ∈ [0, 20] of the left figure. It stands out in the left figure that the character-level CNN of (Conneau et al., 2016) is much slower than DPCNNs. This is partly because it increases the number of feature maps with downsampling (i.e., no pyramid) while it is deeper (32 weight layers), and partly because it deals with characters – there are more characters than words in each document. DPCNNs are more accurate than ShallowCNNs at the expense of more computation time due to the depth (15 layers vs. 1 layer). Nevertheless, their computation time is comparable – the points of both fit in the same range [0, 20]. The efficiency of DPCNNs is due to the exponential decrease of per-layer computation caused by downsampling with the number of feature maps being fixed.
Comparison with non-pyramid variants Furthermore, we tested the following two 'non-pyramid' models for comparison. The first model doubles the number of feature maps at every other downsampling so that per-layer computation is kept approximately constant. (Footnote 4: Note that if we double the number of feature maps, it would increase the computation cost of the next layer by 4 times, as it doubles the dimensionality of both input and output. On image, downsampling with stride 2 cancels it out as it makes data 4 times smaller by shrinking both horizontally and vertically, but text is one dimensional, and so downsampling with stride 2 merely halves data. That is why we doubled the number of feature maps at every other downsampling instead of at every downsampling to avoid exponential increase of computation time.) The second model performs no downsampling. Otherwise, these two models are the same as DPCNN. We show in Figure 3 the error rates of these two variations (labeled as 'Increase #feature maps' and 'No downsampling', respectively) in comparison with DPCNN. The x-axis is the computation time, measured by the seconds spent for categorizing 10K documents. For all types, the models of depth 11 and 15 are shown. Clearly, DPCNN is more accurate and computes faster than the others. Figure 3 is on Yelp.f, and we observed the same performance trend on the other four large datasets.
Figure 3: Comparison with non-pyramid models on Yelp.f. Models of depth 11 and 15 are shown. No unsupervised embeddings.
Small data results Now we turn to the results on the three smaller datasets in Table 3. Again, the previous models are roughly sorted from best to worst. For these small datasets, the DPCNN performances with 100-dim unsupervised embeddings are shown, which turned out to be as good as those with 300-dim unsupervised embeddings. One difference from the large dataset results is that the strength of shallow models stands out. ShallowCNN (row 2) rivals DPCNN (row 1), and Zhang et al.'s best linear model (row 3) moved up from the worst performer to the third best performer. The results are in line with the general fact that more complex models require more training data, and with the paucity of training data, simpler models can outperform more complex ones.
Models | Deep | Unsup. embed. | AG | Sogou | Dbpedia
1 DPCNN + unsupervised embed. | ✓ | tv | 6.87 | 1.84 | 0.88
2 ShallowCNN + unsup. embed. [JZ16] | | tv | 6.57 | 1.89 | 0.84
3 [ZZL15]'s linear model: best | | | 7.64 | 2.81 | 1.31
4 [CSBL16]'s deep char-level CNN: best | ✓ | | 8.67 | 3.18 | 1.29
5 fastText bigrams (Joulin et al., 2016) | | | 7.5 | 3.2 | 1.4
6 [ZZL15]'s word-level CNN: best | ✓ | (w2v) | 8.55 | 4.39 | 1.37
7 [ZZL15]'s deep char-level CNN: best | ✓ | | 9.51 | 4.88 | 1.55
Table 3: Error rates (%) on smaller datasets in comparison with previous models. The previous results are roughly sorted in the order of error rates (best to worst). Notation follows that of Table 2.
3.2.2 Empirical studies We present some empirical results to validate the design choices. For this purpose, the larger five datasets were used to avoid the paucity of training data.
Figure 4: Error rates of DPCNNs with various depths (3, 7, and 15) in comparison with ShallowCNN, on Yelp.p, Yelp.f, Yahoo, Ama.f, and Ama.p. The x-axis is computation time. No unsupervised embeddings.
Depth Figure 4 shows error rates of DPCNNs with 3, 7, and 15 weight layers (blue circles from left to right). For comparison, the ShallowCNN results (green 'x') from (Johnson and Zhang, 2016) are also shown. The x-axis represents the computation time (seconds for categorizing 10K documents on a GPU). For simplicity, the results without unsupervised embeddings are shown for all. The error rate improves as the depth increases.
The results confirm the effectiveness of our strategy of deepening the network. Unsupervised embeddings To study the effectiveness of unsupervised embeddings, we experimented with variations of DPCNN that differ only in whether/how to use unsupervised embeddings (Table 4). First, we compare DPCNNs with and without unsupervised embeddings. The model with unsupervised embeddings (row 1, copied from Table 2 for easy comparison) clearly outperforms the one without them (row 4), which confirms the effectiveness of the use of unsupervised embeddings. Second, in the proposed model (row 1), a region embedding layer receives two types of input, the output of unsupervised embedding functions and the high-dimensional discrete input such as a bow vector. Row 2 shows the results obtained by using unsupervised embeddings to produce sole input (i.e., no discrete vectors provided to the region embedding layer). Degradations of error rates are up to 0.32%, small but consistent. Since the discrete input add almost no computation cost due to its sparseness, its use is desirable. Third, a number of previous studies used unsupervised word embedding to initialize word embedding in neural networks and then fine-tune it as training proceeds (pretraining). The model in row 3 does this with DPCNN using word2vec (Mikolov et al., 2013). The word2vec training was done on the training data (ignoring the labels), 569 Unsupervised embeddings Yelp.p Yelp.f Yahoo Ama.f Ama.p 1 tv-embed. (additional input) 2.64 30.58 23.90 34.81 3.32 2 tv-embed. (sole input) 2.68 30.66 24.09 35.13 3.45 3 word2vec (pretraining) 2.93 32.08 24.11 35.30 3.65 4 – 3.30 31.61 24.64 35.61 3.64 Table 4: Error rates (%) of DPCNN variations that differ in use of unsupervised embeddings. The rows are roughly sorted from best to worst. same as tv-embedding training. This model (row 3) underperformed our proposed model (row 1). We attribute the superiority of the proposed model to its use of richer information than a word embedding. These results support our approach. 4 Conclusion This paper tackled the problem of designing highperformance deep word-level CNNs for text categorization in the large training data setting. We proposed a deep pyramid CNN model which has low computational complexity, and can efficiently represent long-range associations in text and so more global information. It was shown to outperform the previous best models on six benchmark datasets. References Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann LeCun. 2016. Very deep convolutional networks for natural language processing. arXiv:1606.01781v1 (6 June 2016 version) . Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv:1512.03385 . Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks. arXiv:1603.05027 . Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 . Sepp Hochreiter and J¨urgen Schmidhuder. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Rie Johnson and Tong Zhang. 2015a. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT). Rie Johnson and Tong Zhang. 2015b. 
Semi-supervised convolutional neural networks for text categorization via region embedding. In Advances in Neural Information Processing Systems 28 (NIPS 2015). Rie Johnson and Tong Zhang. 2016. Convolutional neural networks for text categorization: Shallow word-level vs. deep character-level. arXiv:1609.00718 . Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv:1607.01795v3 (9 Aug 2016 version) . Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). pages 1746–1751. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013). Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In Proceedings of International Conference on Learning Representations (ICLR). Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT). Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28 (NIPS 2015). 570
2017
52
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 571–581 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1053 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 571–581 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1053 Improved Neural Relation Detection for Knowledge Base Question Answering Mo Yu† Wenpeng Yin? Kazi Saidul Hasan‡ Cicero dos Santos† Bing Xiang‡ Bowen Zhou† †AI Foundations, IBM Research, USA ?Center for Information and Language Processing, LMU Munich ‡IBM Watson, USA {yum,kshasan,cicerons,bingxia,zhou}@us.ibm.com, [email protected] Abstract Relation detection is a core component of many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning which detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different levels of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to make the two components enhance each other. Our experimental results show that our approach not only achieves outstanding relation detection performance, but more importantly, it helps our KBQA system achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks. 1 Introduction Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples (Berant et al., 2013; Yao et al., 2014; Bordes et al., 2015; Bast and Haussmann, 2015; Yih et al., 2015; Xu et al., 2016). For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single <head-entity, relation, tail-entity> KB tuple (Fader et al., 2013; Yih et al., 2014; Bordes et al., 2015); and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links n-grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection1 methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M (Bordes et al., 2015), contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions (Bordes et al., 2015) data set has 14% of the golden test relations not observed in golden training tuples. 
Third, as shown in Figure 1(b), for some KBQA tasks like WebQuestions (Berant et al., 2013), we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing 1In the information extraction field such tasks are usually called relation extraction or relation classification. 571 Question: what episode was mike kelley the writer of Knowledge Base Mike Kelley (American television writer/producer) Mike Kelley (American baseball player) … Entity Linking Love Will Find a Way USA … First baseman … episodes_written position_played Relation Detection (a) (b) Question: what tv show did grant show play on in 2008 Mike Kelley ? episodes_written Entity Linking Relation Detection Grant Show ? starring_roles series (date) from 2008 Constraint Detection Grant Show (American actor) SwingTown Big Love episodes Scoundrels series 2011 from 2010 2008 Figure 1: KBQA examples and its three key components. (a) A single relation example. We first identify the topic entity with entity linking and then detect the relation asked by the question with relation detection (from all relations connecting the topic entity). Based on the detected entity and relation, we form a query to search the KB for the correct answer “Love Will Find a Way”. (b) A more complex question containing two entities. By using “Grant Show” as the topic entity, we could detect a chain of relations “starring roles-series” pointing to the answer. An additional constraint detection takes the other entity “2008” as a constraint, to filter the correct answer “SwingTown” from all candidates found by the topic entity and relation. that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity2 selection from a much smaller candidate entity set after re-ranking. 
The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above 2Following Yih et al. (2015), here topic entity refers to the root of the (directed) query tree; and core-chain is the directed path of relation from root to the answer node. steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. 2 Related Work Relation Extraction Relation extraction (RE) is an important sub-field of information extraction. General research in this field usually works on a (small) pre-defined relation set, where given a text paragraph and two target entities, the goal is to determine whether the text indicates any types of relations between the entities or not. As a result RE is usually formulated as a classification task. Traditional RE methods rely on large amount of hand-crafted features (Zhou et al., 2005; Rink and Harabagiu, 2010; Sun et al., 2011). Recent research benefits a lot from the advancement of deep learning: from word embeddings (Nguyen and Grishman, 2014; Gormley et al., 2015) to deep networks like CNNs and LSTMs (Zeng et al., 2014; dos Santos et al., 2015; Vu et al., 2016) and attention models (Zhou et al., 2016; Wang et al., 2016). The above research assumes there is a fixed (closed) set of relation types, thus no zero-shot learning capability is required. The number of relations is usually not large: The widely used ACE2005 has 11/32 coarse/fine-grained relations; SemEval2010 Task8 has 19 relations; TAC572 KBP2015 has 74 relations although it considers open-domain Wikipedia relations. All are much fewer than thousands of relations in KBQA. As a result, few work in this field focuses on dealing with large number of relations or unseen relations. Yu et al. (2016) proposed to use relation embeddings in a low-rank tensor method. However their relation embeddings are still trained in supervised way and the number of relations is not large in the experiments. Relation Detection in KBQA Systems Relation detection for KBQA also starts with featurerich approaches (Yao and Van Durme, 2014; Bast and Haussmann, 2015) towards usages of deep networks (Yih et al., 2015; Xu et al., 2016; Dai et al., 2016) and attention models (Yin et al., 2016; Golub and He, 2016). Many of the above relation detection research could naturally support large relation vocabulary and open relation sets (especially for QA with OpenIE KB like ParaLex (Fader et al., 2013)), in order to fit the goal of open-domain question answering. Different KBQA data sets have different levels of requirement about the above open-domain capacity. For example, most of the gold test relations in WebQuestions can be observed during training, thus some prior work on this task adopted the close domain assumption like in the general RE research. While for data sets like SimpleQuestions and ParaLex, the capacity to support large relation sets and unseen relations becomes more necessary. To the end, there are two main solutions: (1) use pre-trained relation embeddings (e.g. 
from TransE (Bordes et al., 2013)), like (Dai et al., 2016); (2) factorize the relation names to sequences and formulate relation detection as a sequence matching and ranking task. Such factorization works because that the relation names usually comprise meaningful word sequences. For example, Yin et al. (2016) split relations to word sequences for single-relation detection. Liang et al. (2016) also achieve good performance on WebQSP with wordlevel relation representation in an end-to-end neural programmer model. Yih et al. (2015) use character tri-grams as inputs on both question and relation sides. Golub and He (2016) propose a generative framework for single-relation KBQA which predicts relation with a character-level sequenceto-sequence model. Another difference between relation detection in KBQA and general RE is that general RE research assumes that the two argument entities are both available. Thus it usually benefits from features (Nguyen and Grishman, 2014; Gormley et al., 2015) or attention mechanisms (Wang et al., 2016) based on the entity information (e.g. entity types or entity embeddings). For relation detection in KBQA, such information is mostly missing because: (1) one question usually contains single argument (the topic entity) and (2) one KB entity could have multiple types (type vocabulary size larger than 1,500). This makes KB entity typing itself a difficult problem so no previous used entity information in the relation detection model.3 3 Background: Different Granularity in KB Relations Previous research (Yih et al., 2015; Yin et al., 2016) formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. (1) Relation Name as a Single Token (relationlevel). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from the low relation coverage due to limited amount of training data, thus cannot generalize well to large number of opendomain relations. For example, in Figure 1, when treating relation names as single tokens, it will be difficult to match the questions to relation names “episodes written” and “starring roles” if these names do not appear in training data – their relation embeddings hrs will be random vectors thus are not comparable to question embeddings hqs. (2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure 1(b), when doing only word-level matching, it is difficult to rank the target relation “starring roles” higher compared to the incorrect relation “plays produced”. This is because the incorrect relation contains word “plays”, which is more similar to the question 3Such entity information has been used in KBQA systems as features for the final answer re-rankers. 
(which contains the word "play") in the embedding space. On the other hand, if the target relation co-occurs with questions related to "tv appearance" in training, then by treating the whole relation as a token (i.e., a relation id) we could better learn the correspondence between this token and phrases like "tv show" and "play on".

Relation        | Token            | Question 1                               | Question 2
(question text) |                  | what tv episodes were <e> the writer of | what episode was written by <e>
relation-level  | episodes written | tv episodes were <e> the writer of      | episode was written by <e>
word-level      | episodes         | tv episodes                              | episode
word-level      | written          | the writer of                            | written

Table 1: An example of a KB relation (episodes written) with two types of relation tokens (relation names and words), and two questions asking about this relation. The topic entity is replaced with the token <e>, which gives position information to the deep networks. The table body shows the evidence phrase for each relation token in each question.

The two types of relation representation contain different levels of abstraction. As shown in Table 1, the word-level focuses more on local information (words and short phrases), while the relation-level focuses more on global information (long phrases and skip-grams) but suffers from data sparsity. Since both levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both the word-level and relation-level representations to get the final ranking score. Section 4 gives the details of our proposed approach. 4 Improved KB Relation Detection This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we address the following three problems in learning question and relation representations. 4.1 Relation Representations from Different Granularity We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes r = {r^word_1, · · · , r^word_M1} ∪ {r^rel_1, · · · , r^rel_M2}, where the first M1 tokens are words (e.g., {episode, written}) and the last M2 tokens are relation names, e.g., {episode written} or {starring roles, series} (when the target is a chain as in Figure 1(b)). We transform each token above into its word embedding and then use two BiLSTMs (with shared parameters) to get their hidden representations [B^word_{1:M1} : B^rel_{1:M2}] (each row vector β_i is the concatenation of the forward and backward representations at position i). We initialize the relation-sequence LSTM with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling over these two sets of vectors to get the final relation representation h_r. 4.2 Different Abstractions of Question Representations From Table 1, we can see that different parts of a relation could match different contexts of the question text. Usually, relation names can match longer phrases in the question and relation words can match short phrases, yet different words might match phrases of different lengths. As a result, we want the question representations to also comprise vectors that summarize phrase information of various lengths (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions.
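Before the question side is elaborated below, here is a small sketch of how the mixed-granularity relation input of Section 4.1 could be assembled. The separator conventions (dots between relations in a chain, underscores inside a relation name) are illustrative assumptions, not the paper's exact Freebase preprocessing.

```python
import re

def relation_tokens(relation_chain):
    """Build the Section 4.1 input r: word-level tokens followed by relation-level
    tokens, for a single relation or a chain such as "starring_roles.series"."""
    rel_names = relation_chain.split(".")                      # M2 relation-level tokens
    word_tokens = [t for name in rel_names
                   for t in re.split(r"[_\W]+", name) if t]    # M1 word-level tokens
    return word_tokens, rel_names

words, names = relation_tokens("starring_roles.series")
print(words)   # ['starring', 'roles', 'series']  -> word-level BiLSTM input
print(names)   # ['starring_roles', 'series']     -> relation-level BiLSTM input
```

Both token lists are embedded and fed through the shared relation BiLSTM described above before the joint max-pooling that produces h_r.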
The first BiLSTM layer works on the word embeddings of the question words q = {q_1, · · · , q_N} and produces hidden representations Γ^(1)_{1:N} = [γ^(1)_1; · · · ; γ^(1)_N]. The second BiLSTM layer works on Γ^(1)_{1:N} to get the second set of hidden representations Γ^(2)_{1:N}. Since the second BiLSTM starts from the hidden vectors of the first layer, intuitively it can learn more general and abstract information than the first layer. Note that the first (second) layer of question representations does not necessarily correspond to the word-level (relation-level) relation representations; instead, either layer of question representations could potentially match either level of relation representations. This raises the difficulty of matching between different levels of relation and question representations; the following section gives our proposal for dealing with this problem.

Figure 2: The proposed Hierarchical Residual BiLSTM (HR-BiLSTM) model for relation detection. Note that without the dotted arrows of shortcut connections between the two layers, the model would only compute the similarity between the second layer of question representations and the relation, and thus would not perform hierarchical matching.

4.3 Hierarchical Matching between Relation and Question Now we have question contexts of different lengths encoded in Γ^(1)_{1:N} and Γ^(2)_{1:N}. Unlike the standard usage of deep BiLSTMs, which employs the representations in the final layer for prediction, here we expect the two layers of question representations to be complementary to each other, and both should be compared to the relation representation space (hierarchical matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variation. For example, in Table 1 the relation word written could be matched either to the same single word in the question or to a much longer phrase, be the writer of. We could perform the above hierarchical matching by computing the similarity between each layer of Γ and h_r separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table 2). Our analysis in Section 6.2 shows that this naive method suffers from training difficulty, as evidenced by the fact that its converged training loss is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures is itself more difficult. To overcome these difficulties, we adopt the idea of Residual Networks (He et al., 2016) for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two ways of performing such hierarchical residual matching: (1) connecting each γ^(1)_i and γ^(2)_i, resulting in γ'_i = γ^(1)_i + γ^(2)_i for each position i; the final question representation h_q then becomes a max-pooling over all γ'_i, 1 ≤ i ≤ N.
(2) applying max-pooling on Γ^(1)_{1:N} and Γ^(2)_{1:N} to get h^(1)_max and h^(2)_max, respectively, and then setting h_q = h^(1)_max + h^(2)_max. Finally, we compute the matching score of r given q as s_rel(r; q) = cos(h_r, h_q). Intuitively, the proposed method should benefit from hierarchical training, since the second layer fits the residuals of the first layer's matching, so the two layers of representations are more likely to be complementary to each other. This also ensures that the vector spaces of the two layers are comparable and makes second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation r+ and the other relations r− in the candidate pool R: l_rel = max{0, γ − s_rel(r+; q) + s_rel(r−; q)}, where γ is a constant margin parameter. Figure 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model. Remark: Another way of performing hierarchical matching consists in relying on an attention mechanism, e.g., (Parikh et al., 2016), to find the correspondence between different levels of representations. This performs worse than the HR-BiLSTM (see Table 2). 5 KBQA Enhanced by Relation Detection This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work (Yih et al., 2015; Xu et al., 2016), our KBQA system uses an existing entity linker to produce the top-K linked entities, EL_K(q), for a question q ("initial entity linking"). Then we generate the KB queries for q following the four steps illustrated in Algorithm 1.

Algorithm 1: KBQA with two-step relation detection
Input: question q, knowledge base KB, the initial top-K entity candidates EL_K(q)
Output: top query tuple (ê, r̂, {(c, r_c)})
1. Entity Re-Ranking (first-step relation detection): use the raw question text as input for a relation detector to score all relations in the KB that are associated with the entities in EL_K(q); use the relation scores to re-rank EL_K(q) and generate a shorter list EL'_{K'}(q) containing the top-K' entity candidates (Section 5.1).
2. Relation Detection: detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token <e> (Section 5.2).
3. Query Generation: combine the scores from steps 1 and 2, and select the top pair (ê, r̂) (Section 5.3).
4. Constraint Detection (optional): compute the similarity between q and any neighbor entity c of the entities along r̂ (connected by a relation r_c); add the high-scoring c and r_c to the query (Section 5.4).

Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only reach 72.7% top-1 accuracy on identifying topic entities. This is usually due to ambiguities of entity names; e.g., in Figure 1(a) there are a TV writer and a baseball player named "Mike Kelley", who are impossible to distinguish with entity name matching alone. Having observed that different entity candidates usually connect to different relations, we propose to aid entity disambiguation in the initial entity linking with the relations detected in the question. Sections 5.1 and 5.2 elaborate on how our relation detection helps to re-rank entities in the initial entity linking, and how those re-ranked entities then enable more accurate relation detection. The KBQA end task, as a result, benefits from this process.
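Steps 1 and 2 of Algorithm 1 both call the relation scorer s_rel from Section 4. The sketch below shows one way the HR-BiLSTM scorer could be put together in PyTorch; it is an illustration rather than the authors' implementation, and the single shared embedding table for words and relation names, the layer sizes, and the choice of residual variant (2) (summing the max-pooled layers) are simplifying assumptions made here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HRBiLSTM(nn.Module):
    """Sketch of the Hierarchical Residual BiLSTM relation scorer (Section 4)."""
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Two stacked question BiLSTMs; the second consumes the first's outputs.
        self.q_lstm1 = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.q_lstm2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        # One BiLSTM shared by word-level and relation-level relation tokens.
        self.r_lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def encode_question(self, q_ids):
        g1, _ = self.q_lstm1(self.emb(q_ids))   # Gamma^(1): (B, N, 2h)
        g2, _ = self.q_lstm2(g1)                # Gamma^(2): (B, N, 2h)
        # Hierarchical residual matching, variant (2): max-pool each layer, then sum.
        return g1.max(dim=1).values + g2.max(dim=1).values          # h_q: (B, 2h)

    def encode_relation(self, r_word_ids, r_name_ids):
        bw, state = self.r_lstm(self.emb(r_word_ids))
        # Initialize the relation-name pass with the word-sequence final state,
        # as a back-off for unseen relation names.
        br, _ = self.r_lstm(self.emb(r_name_ids), state)
        return torch.cat([bw, br], dim=1).max(dim=1).values         # h_r: (B, 2h)

    def score(self, q_ids, r_word_ids, r_name_ids):
        # s_rel(r; q) = cos(h_r, h_q)
        return F.cosine_similarity(self.encode_question(q_ids),
                                   self.encode_relation(r_word_ids, r_name_ids),
                                   dim=-1)

def ranking_loss(s_pos, s_neg, margin=0.1):
    # l_rel = max{0, gamma - s_rel(r+; q) + s_rel(r-; q)}
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()
```

During training, `ranking_loss` would be applied to the scores of the gold relation and of negative relations drawn from the candidate pool R.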
5.1 Entity Re-Ranking In this step, we use the raw question text as input for a relation detector to score all relations in the KB that connect to at least one of the entity candidates in EL_K(q). We call this step relation detection on an entity set, since it does not operate on a single topic entity as in the usual setting. We use the HR-BiLSTM described in Section 4. For each question q, after generating a score s_rel(r; q) for each relation using the HR-BiLSTM, we use the top l best-scoring relations (R^l_q) to re-rank the original entity candidates. Concretely, for each entity e and its associated relations R_e, given the original entity-linker score s_linker and the score of the most confident relation r ∈ R^l_q ∩ R_e, we sum these two scores to re-rank the entities: s_rerank(e; q) = α · s_linker(e; q) + (1 − α) · max_{r ∈ R^l_q ∩ R_e} s_rel(r; q). Finally, we select the top K' < K entities according to s_rerank to form the re-ranked list EL'_{K'}(q). We use the example in Figure 1(a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as "episodes written", "author of" and "profession". Then, according to the connections of the entity candidates in the KB, the TV writer "Mike Kelley" will be scored higher than the baseball player "Mike Kelley", because the former has the relations "episodes written" and "profession". This method can be viewed as exploiting entity-relation collocation for entity linking. 5.2 Relation Detection In this step, for each candidate entity e ∈ EL'_{K'}(q), we use the question text as the input to a relation detector to score all the relations r ∈ R_e that are associated with the entity e in the KB. (Note that the number of entities and the number of relation candidates are much smaller than in the previous step.) Because we have a single topic entity as input in this step, we first reformat the question: we replace the candidate e's entity mention in q with a token "<e>". This helps the model better distinguish the relative position of each word with respect to the entity. We use the HR-BiLSTM model to predict the score of each relation r ∈ R_e: s_rel(r; e, q). 5.3 Query Generation Finally, the system outputs the <entity, relation (or core-chain)> pair (ê, r̂) according to: s(ê, r̂; q) = max_{e ∈ EL'_{K'}(q), r ∈ R_e} (β · s_rerank(e; q) + (1 − β) · s_rel(r; e, q)), where β is a hyperparameter to be tuned. 5.4 Constraint Detection Similar to (Yih et al., 2015), we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top-scored query generated by the previous three steps, for each node v (the answer node or the CVT node as in Figure 1(b)), we collect all nodes c connected to v by any relation r_c and generate a sub-graph associated with the original query. (Starting only from the top-1 query suffers more from error propagation; however, we still achieve state-of-the-art results on WebQSP in Section 6, showing the advantage of our relation detection model. We leave beam search and feature extraction on the beam for final answer re-ranking, as in previous research, to future work.) (2) Entity linking on sub-graph nodes: we compute a matching score between each n-gram in the input question (not overlapping the topic entity) and the entity name of each c (except for the nodes already in the original query), taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and Appendix B for special rules dealing with date/answer-type constraints). If the matching score is larger than a threshold θ (tuned on the training set), we add the constraint entity c (and r_c) to the query by attaching it to the corresponding node v on the core-chain.
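To make the score combinations of Sections 5.1 and 5.3 concrete, here is a small, self-contained Python sketch; the dictionaries, the α, β, l and K' values, and the toy "Mike Kelley" numbers are illustrative assumptions rather than values from the paper.

```python
def rerank_entities(linker_score, rel_score, entity_rels, top_l=3, alpha=0.6, k_prime=2):
    """Section 5.1: s_rerank(e;q) = alpha*s_linker(e;q) + (1-alpha)*max s_rel(r;q)
    over r in the intersection of R_q^l and R_e."""
    top_rels = set(sorted(rel_score, key=rel_score.get, reverse=True)[:top_l])  # R_q^l
    def s_rerank(e):
        best = max((rel_score[r] for r in top_rels & entity_rels[e]), default=0.0)
        return alpha * linker_score[e] + (1 - alpha) * best
    reranked = sorted(linker_score, key=s_rerank, reverse=True)[:k_prime]       # EL'_{K'}(q)
    return reranked, s_rerank

def generate_query(reranked, s_rerank, srel_eq, entity_rels, beta=0.5):
    """Section 5.3: pick (e, r) maximizing beta*s_rerank(e;q) + (1-beta)*s_rel(r;e,q)."""
    scored = [(beta * s_rerank(e) + (1 - beta) * srel_eq[(e, r)], e, r)
              for e in reranked for r in entity_rels[e]]
    return max(scored)

# Toy numbers mirroring the "Mike Kelley" example: the linker slightly prefers the
# baseball player, but the detected relations favor the TV writer.
linker = {"MikeKelley_tv": 0.4, "MikeKelley_baseball": 0.5}
rels_q = {"episodes_written": 0.9, "profession": 0.7, "baseball_position": 0.2}
e_rels = {"MikeKelley_tv": {"episodes_written", "profession"},
          "MikeKelley_baseball": {"baseball_position"}}
reranked, s_rr = rerank_entities(linker, rels_q, e_rels)
# In the full pipeline s_rel(r; e, q) would be re-computed on the <e>-reformatted
# question; this sketch simply reuses the first-pass relation scores.
srel_eq = {(e, r): rels_q[r] for e in reranked for r in e_rels[e]}
print(reranked, generate_query(reranked, s_rr, srel_eq, e_rels))
```

With these toy scores, the TV writer overtakes the baseball player after re-ranking, which is exactly the disambiguation behavior described above.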
6 Experiments 6.1 Task Introduction & Settings We use the SimpleQuestions (Bordes et al., 2015) and WebQSP (Yih et al., 2016) datasets. Each question in these datasets is labeled with its gold semantic parse, so we can directly evaluate relation detection performance on its own as well as evaluate the KBQA end task. SimpleQuestions (SQ): a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) (Bordes et al., 2015), in order to compare with previous research. Yin et al. (2016) also evaluated their relation extractor on this dataset and released their proposed question-relation pairs, so we run our relation detection model on their dataset. For the KBQA evaluation, we also start from their entity linking results (downloaded from https://github.com/Gorov/SimpleQuestions-EntityLinking). Therefore, our results can be compared with their reported results on both tasks. WebQSP (WQ): a multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following Yih et al. (2016), we use the S-MART (Yang and Chang, 2015) entity-linking outputs (https://github.com/scottyih/STAGG). In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP dataset (available at https://github.com/Gorov/SimpleQuestions-EntityLinking). For each question and its labeled semantic parse, (1) we first select the topic entity from the parse, and then (2) select all the relations and relation chains (of length ≤ 2) connected to the topic entity, setting the core-chain labeled in the parse as the positive label and all the others as negative examples. We tune the following hyper-parameters on development sets: (1) the size of the hidden states for LSTMs ({50, 100, 200, 400}; for CNNs we double the size for a fair comparison); (2) the learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section 4.3); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we apply entity replacement first (see Section 5.2 and Figure 1). All word vectors are initialized with 300-d pretrained word embeddings (Mikolov et al., 2013). The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g., TransE) usually support only limited sets of relation names. We leave the use of pre-trained relation embeddings to future work. 6.2 Relation Detection Results Table 2 shows the results on the two relation detection tasks. The AMPCNN result is from (Yin et al., 2016), which yielded state-of-the-art scores by outperforming several attention-based methods.
Model                                                    | Relation Input Views | SimpleQuestions | WebQSP
AMPCNN (Yin et al., 2016)                                | words                | 91.3            |
BiCNN (Yih et al., 2015)                                 | char-3-gram          | 90.0            | 77.74
BiLSTM w/ words                                          | words                | 91.2            | 79.32
BiLSTM w/ relation names                                 | rel names            | 88.9            | 78.96
Hier-Res-BiLSTM (HR-BiLSTM)                              | words + rel names    | 93.3            | 82.53
  w/o rel name                                           | words                | 91.3            | 81.69
  w/o rel words                                          | rel names            | 88.8            | 79.68
  w/o residual learning (weighted sum on two layers)     | words + rel names    | 92.5            | 80.65
  replacing residual with attention (Parikh et al., 2016)| words + rel names    | 92.6            | 81.38
  single-layer BiLSTM question encoder                   | words + rel names    | 92.8            | 78.41
  replacing BiLSTM with CNN (HR-CNN)                     | words + rel names    | 92.9            | 79.08

Table 2: Accuracy on the SimpleQuestions and WebQSP relation detection tasks (test sets). The top shows the performance of the baselines; the bottom gives the results of our proposed model together with the ablation tests.

We re-implemented the BiCNN model from (Yih et al., 2015), where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with the relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p < 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ, respectively). Note that using only relation names instead of words results in a weaker BiLSTM baseline: the model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, which suggests that unseen relations have a much bigger impact on SimpleQuestions. Ablation Test: The bottom of Table 2 shows ablation results for the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvements on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section 4.3). For the attention-based baseline, we tried the model from (Parikh et al., 2016) and its one-way variants, where the one-way model gives better results. (We also tried to apply the same attention method on a deep BiLSTM with residual connections, but it did not lead to better results than HR-BiLSTM. We hypothesize that hierarchical matching with an attention mechanism may work better for long sequences, and that newer, more advanced attention mechanisms (Wang and Jiang, 2016; Wang et al., 2017) might help hierarchical matching; we leave these directions to future work.) Note that residual learning helps significantly on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions; on SimpleQuestions, even removing the deep layers causes only a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because handling a larger scope of context matching is more important in this dataset. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. Analysis: Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for analysis purposes.
First, we have the hypothesis that training of the weighted-sum model usually falls to local optima, since deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable. This is evidenced by that during training one layer usually gets a weight close to 0 thus is ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we have the hypothesis that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that deeper BiLSTM does not always result in lower training accuracy. In the experiments a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a 578 single-layer BiLSTM. Under our setting the twolayer model captures the single-layer model as a special case (so it could potentially better fit the training data), this result suggests that the deep BiLSTM without shortcut connections might suffers more from training difficulty. Finally, we hypothesize that HR-BiLSTM is more than combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This proves that the residual and deep structures both contribute to the good performance of HR-BiLSTM. 6.3 KBQA End-Task Results Table 3 compares our system with two published baselines (1) STAGG (Yih et al., 2015), the stateof-the-art on WebQSP11 and (2) AMPCNN (Yin et al., 2016), the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset. In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3). Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that in contrast to previous KBQA systems, our system does not use joint-inference or feature-based re-ranking step, nevertheless it still achieves better or comparable results to the state-of-the-art. The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the reranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. 11The STAGG score on SQ is from (Bao et al., 2016). 
Accuracy System SQ WQ STAGG 72.8 63.9 AMPCNN (Yin et al., 2016) 76.4 Baseline: Our Method w/ 75.1 60.0 baseline relation detector Our Method 77.0 63.0 w/o entity re-ranking 74.9 60.6 w/o constraints 58.0 Our Method (multi-detectors) 78.7 63.9 Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in (Yih et al., 2015), constraint detection is crucial for our system12. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see Yih et al. (2015) for the three models used), we also try to use the top-3 relation detectors from Section 6.2. As shown on the last row of Table 3, this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. 7 Conclusion KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-arts. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in (Liang et al., 2016), to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions (Su et al., 2016) and ComplexQuestions (Bao et al., 2016) to handle more characteristics of general QA. 12Note that another reason is that we are evaluating on accuracy here. When evaluating on F1 the gap will be smaller. 579 References Junwei Bao, Nan Duan, Zhao Yan, Ming Zhou, and Tiejun Zhao. 2016. Constraint-based question answering with knowledge graph. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 2503–2514. Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, pages 1431–1440. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 1533– 1544. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems. pages 2787–2795. Zihang Dai, Lei Li, and Wei Xu. 2016. Cfo: Conditional focused neural question answering with largescale knowledge bases. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 800–810. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 626–634. Anthony Fader, Luke S Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In ACL (1). Citeseer, pages 1608–1618. David Golub and Xiaodong He. 2016. Character-level question answering with attention. arXiv preprint arXiv:1604.00727 . Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1774– 1784. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 770–778. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020 . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 68–74. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2249–2255. Bryan Rink and Sanda Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, Uppsala, Sweden, pages 256–259. Yu Su, Huan Sun, Brian Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. 2016. On generating characteristic-rich question sets for qa evaluation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 562–572. https://aclweb.org/anthology/D16-1054. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 521–529. Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hinrich Sch¨utze. 2016. Combining recurrent and convolutional neural networks for relation classification. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 534–539. 580 Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention cnns. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1298–1307. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with lstm. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 1442–1451. http://www.aclweb.org/anthology/N16-1170. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814 . Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2326–2336. Yi Yang and Ming-Wei Chang. 2015. S-mart: Novel tree-based structured learning algorithms applied to tweet entity linking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 504–513. Xuchen Yao, Jonathan Berant, and Benjamin Van Durme. 2014. Freebase qa: Information extraction or semantic parsing? ACL 2014 page 82. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In ACL (1). Citeseer, pages 956–966. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Association for Computational Linguistics (ACL). Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 643–648. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 201–206. Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch¨utze. 2016. Simple question answering by attentive convolutional neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1746–1756. Mo Yu, Mark Dredze, Raman Arora, and Matthew R. Gormley. 2016. Embedding lexical features via lowrank tensors. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, San Diego, California, pages 1019–1029. http://www.aclweb.org/anthology/N16-1117. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pages 2335– 2344. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Association for Computational Linguistics. pages 427–434. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attentionbased bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 207–212. 581
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1054 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1054 Deep Keyphrase Generation Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He∗, Peter Brusilovsky, Yu Chi School of Computing and Information University of Pittsburgh Pittsburgh, PA, 15213 {rui.meng, saz31, shh69, daqing, peterb, yuc73}@pitt.edu Abstract Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase. 1 Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text. The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper. We use ∗Corresponding author the term “keyphrase” interchangeably with “keyword” in the rest of this paper, as both terms have an implication that they may contain multiple words. High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content. As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; Witten et al., 1999). Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms. Therefore, this study also focuses on extracting keyphrases from scientific publications. Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999), text summarization (Zhang et al., 2004), text categorization (Hulth and Megyesi, 2006), and opinion mining (Berend, 2011). Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003). The first step is to acquire a list of keyphrase candidates. Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; Wang et al., 2016). 
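As a concrete (and deliberately simplified) illustration of this candidate-acquisition step, the sketch below extracts n-gram candidates whose part-of-speech pattern is zero or more adjectives followed by nouns. The suffix-based tagger is only a toy stand-in for a real POS tagger and is not how the cited systems work.

```python
import re

def toy_pos(word):
    # Toy heuristic standing in for a real POS tagger: a few adjective-like
    # suffixes mark ADJ, everything else is treated as NOUN.
    return "ADJ" if re.search(r"(al|ive|ous|ic|ful)$", word) else "NOUN"

def candidate_phrases(text, max_len=3):
    """Return n-grams (n <= max_len) matching the pattern ADJ* NOUN+,
    a common heuristic for keyphrase candidates."""
    words = re.findall(r"[a-z][a-z-]*", text.lower())
    tags = [toy_pos(w) for w in words]
    cands = set()
    for i in range(len(words)):
        for n in range(1, min(max_len, len(words) - i) + 1):
            span = tags[i:i + n]
            k = 0
            while k < n and span[k] == "ADJ":                 # leading adjectives
                k += 1
            if k < n and all(t == "NOUN" for t in span[k:]):  # followed by nouns only
                cands.add(" ".join(words[i:i + n]))
    return cands

print(sorted(candidate_phrases("automatic keyphrase extraction from scientific publications")))
```

A ranker (the second step, described next) would then score these candidates, e.g., with TF-IDF features or a supervised classifier.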
The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features (Frank et al., 1999; Liu et al., 2009, 2010; Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; Witten et al., 1999). There are two major drawbacks in the above keyphrase extraction approaches. First, these methods can only extract the keyphrases that ap582 pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms. However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication. In this paper, we denote phrases that do not match any contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases. Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets. The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model. Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank. However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content. Table 1: Proportion of the present keyphrases and absent keyphrases in four public datasets Dataset # Keyphrase % Present % Absent Inspec 19,275 55.69 44.31 Krapivin 2,461 44.74 52.26 NUS 2,834 67.75 32.25 SemEval 12,296 42.01 57.99 To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases. Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases. Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text. For example, when human annotators see “Latent Dirichlet Allocation” in the text, they might write down “topic modeling” and/or “text mining” as possible keyphrases. In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features. For example, the phrases following “we propose/apply/use” could be important in the text. As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features. To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) (Cho et al., 2014; Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding). Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information. 
Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information. The contribution of this paper is three-fold. First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur. Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases. Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods. In the remainder of this paper, we first review the related work in Section 2. Then, we elaborate upon the proposed model in Section 3. After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6. Section 7 concludes the paper. 2 Related Work 2.1 Automatic Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document. A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps. The first step is to generate a list of phrase can583 didates with heuristic methods. As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept. The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Wang et al., 2016; Le et al., 2016), and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008). The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document. The top-ranked candidates are returned as keyphrases. Both supervised and unsupervised machine learning methods are widely employed here. For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored (Frank et al., 1999; Witten et al., 1999; Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014). As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009), detecting representative phrases from topical clusters (Liu et al., 2009, 2010), and so on. Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways. Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases. Liu et al. (2011) share the most similar ideas to our work. They used a word alignment model, which learns a translation from the documents to the keyphrases. This approach alleviates the problem of vocabulary gaps between source and target to a certain degree. However, this translation model is unable to handle semantic meaning. 
Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases. Zhang et al. (2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction. However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases. 2.2 Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach. It was first introduced by Cho et al. (2014) and Sutskever et al. (2014) to solve translation problems. As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016). Different strategies have been explored to improve the performance of the Encoder-Decoder model. The attention mechanism (Bahdanau et al., 2014) is a soft alignment approach that allows the model to automatically locate the relevant input components. In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016). A discrepancy exists between the optimizing objective during training and the metrics during evaluation. A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc’Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016). 3 Methodology This section will introduce our proposed deep keyphrase generation method in detail. First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model. Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4. 3.1 Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x(i), p(i)) contains one source text x(i), and Mi target keyphrases p(i) = (p(i,1), p(i,2), . . . , p(i,Mi)). Both the source text x(i) and keyphrase p(i,j) are sequences of words: x(i) = x(i) 1 , x(i) 2 , . . . , x(i) Lxi p(i,j) = y(i,j) 1 , y(i,j) 2 , . . . , y(i,j) Lp(i,j) Lx(i) and Lp(i,j)denotes the length of word sequence of x(i) and p(i,j) respectively. 584 Each data sample contains one source text sequence and multiple target phrase sequences. To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence. We adopt a simple way, which splits the data sample (x(i), p(i)) into Mi pairs: (x(i), p(i,1)), (x(i), p(i,2)), . . . , (x(i), p(i,Mi)). Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence. For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase. 3.2 Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation . 
Both the encoder and decoder are implemented with recurrent neural networks (RNN). The encoder RNN converts the variable-length input sequence x = (x1, x2, ..., xT ) into a set of hidden representation h = (h1, h2, . . . , hT ), by iterating the following equations along time t: ht = f (xt, ht−1) (1) where f is a non-linear function. We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h1, h2, ..., hT ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y1, y2, ..., yT ′) word by word, through a conditional language model: st = f(yt−1, st−1, c) p(yt|y1,...,t−1, x) = g(yt−1, st, c) (3) where st is the hidden state of the decoder RNN at time t. The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary. yt is the predicted word at time t, by taking the word with largest probability after g(·). The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence. After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities. 3.3 Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network. Previous studies (Bahdanau et al., 2014; Cho et al., 2014) indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997). As a result, the above non-linear function f is replaced by the GRU function (see in (Cho et al., 2014)). Another forward GRU is used as the decoder. In addition, an attention mechanism is adopted to improve performance. The attention mechanism was firstly introduced by Bahdanau et al. (2014) to make the model dynamically focus on the important parts in input. The context vector c is computed as a weighted sum of hidden representation h = (h1, . . . , hT ): ci = T X j=1 αijhj αij = exp(a(si−1, hj)) PT k=1 exp(a(si−1, hk)) (4) where a(si−1, hj) is a soft alignment function that measures the similarity between si−1 and hj; namely, to which degree the inputs around position j and the output at position i match. 3.4 Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g. 30,000 words in (Cho et al., 2014)), but a large amount of long-tail words are simply ignored. Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words. Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known. The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text. 585 By incorporating the copying mechanism, the probability of predicting each new word yt consists of two parts. 
The first term is the probability of generating the term (see Equation 3) and the second one is the probability of copying it from the source text: p(yt|y1,...,t−1, x) = pg(yt|y1,...,t−1, x) + pc(yt|y1,...,t−1, x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention. But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part pc(yt|y1,...,t−1, x) only considers the words in source text. Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text. pc(yt|y1,...,t−1, x) = 1 Z X j:xj=yt exp(ψc(xj)), y ∈χ ψc(xj) = σ(hT j Wc)st (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and Wc ∈R is a learned parameter matrix. Z is the sum of all the scores and is used for normalization. Please see (Gu et al., 2016) for more details. 4 Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets. Then, we introduce our evaluation metrics and baselines. 4.1 Training Dataset There are several publicly-available datasets for evaluating keyphrase generation. The largest one came from Krapivin et al. (2008), which contains 2,304 scientific publications. However, this amount of data is unable to train a robust recurrent neural network model. In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors. Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, ScienceDirect, Wiley, and Web of Science etc. (Han et al., 2013; Rui et al., 2016). In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al. (2008). Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k. Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines. 4.2 Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used. In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles. We take the title and abstract as the source text. Each dataset is described in detail below. – Inspec (Hulth, 2003): This dataset provides 2,000 paper abstracts. We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models. – Krapivin (Krapivin et al., 2008): This dataset provides 2,304 papers with full-text and author-assigned keyphrases. 
However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines. – NUS (Nguyen and Kan, 2007): We use the author-assigned keyphrases and treat all 211 papers as the testing data. Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation. – SemEval-2010 (Kim et al., 2010): 288 articles were collected from the ACM Digital 586 Library. 100 articles were used for testing and the rest were used for training supervised baselines. – KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science. They were randomly selected from our obtained 567,830 articles. Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set. Thus we take the 20,000 articles in the validation set to train the supervised baselines. It is worth noting that we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed. 4.3 Implementation Details In total, there are 2,780,316 ⟨text, keyphrase⟩pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword. The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol ⟨digit⟩are applied. Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (CopyRNN). For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1]. Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10−4, gradient clipping = 0.1 and dropout rate = 0.5. The max depth of beam search is set to 6, and the beam size is set to 200. The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations). In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words. To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest. 4.4 Baseline Models Four unsupervised algorithms (Tf-Idf, TextRank (Mihalcea and Tarau, 2004), SingleRank (Wan and Xiao, 2008), and ExpandRank (Wan and Xiao, 2008)) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a)) are adopted as baselines. We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010), and the two supervised methods following the default setting as specified in their papers. 4.5 Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F1) are employed for measuring the algorithm’s performance. 
Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records. Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing. 5 Results and Analysis We conduct an empirical study on three different tasks to evaluate our model. 5.1 Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task. To make a fair comparison, we only consider the present keyphrases for evaluation in this task. Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN). For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets. The best scores are highlighted in bold and the underlines indicate the second best performances. The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and ExpandRank) have a robust performance across different datasets. The ExpandRank fails to return any result on the KP20k dataset, due to its high time complexity. The measures on NUS and SemEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010), probably because we utilized the paper abstract instead of the full text for training, which may 587 Method Inspec Krapivin NUS SemEval KP20k F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 F1@5 F1@10 Tf-Idf 0.221 0.313 0.129 0.160 0.136 0.184 0.128 0.194 0.102 0.126 TextRank 0.223 0.281 0.189 0.162 0.195 0.196 0.176 0.187 0.175 0.147 SingleRank 0.214 0.306 0.189 0.162 0.140 0.173 0.135 0.176 0.096 0.119 ExpandRank 0.210 0.304 0.081 0.126 0.132 0.164 0.139 0.170 N/A N/A Maui 0.040 0.042 0.249 0.216 0.249 0.268 0.044 0.039 0.270 0.230 KEA 0.098 0.126 0.110 0.152 0.069 0.084 0.025 0.026 0.171 0.154 RNN 0.085 0.064 0.135 0.088 0.169 0.127 0.157 0.124 0.179 0.189 CopyRNN 0.278 0.342 0.311 0.266 0.334 0.326 0.293 0.304 0.333 0.262 Table 2: The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information. The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models. As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected. It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text. In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary. This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text. The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average. This result demonstrates the importance of source text to the extraction task. 
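(As a concrete reference for how the F1@5 and F1@10 scores in Table 2 are obtained, the following is a minimal sketch of macro-averaged F1@k with stemmed exact matching. It is our own illustration rather than the authors' evaluation script; the use of NLTK's PorterStemmer and the convention of computing recall against the number of target keyphrases per document are assumptions.)

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_phrase(phrase):
    # Stem each token so that, e.g., "video indexing" matches "video index"
    return " ".join(stemmer.stem(tok) for tok in phrase.lower().split())

def f1_at_k(predicted, gold, k):
    """F1@k for one document; two keyphrases match if their stemmed forms are equal."""
    preds = [stem_phrase(p) for p in predicted[:k]]
    golds = {stem_phrase(g) for g in gold}
    correct = sum(1 for p in preds if p in golds)
    precision = correct / len(preds) if preds else 0.0
    recall = correct / len(golds) if golds else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1_at_k(dataset, k):
    """Macro-average over (predicted, gold) pairs, one pair per document."""
    scores = [f1_at_k(pred, gold, k) for pred, gold in dataset]
    return sum(scores) / len(scores)
```

Under this convention, a document with ten gold keyphrases and three stemmed matches in the top five predictions scores precision 0.6, recall 0.3, and F1 0.4.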
Besides, nearly 2% of all correct predictions contained outof-vocabulary words. The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and CopyRNN for an article about video search. We see that both models can generate phrases that relate to the topic of information retrieval and video. However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases. CopyRNN, on the other hand, predicts more detailed phrases like “video metadata” and “integrated ranking”. An interesting bad case, “rich content” coordinates with a keyphrase “video metadata”, and the CopyRNN mistakenly puts it into prediction. 5.2 Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model’s capability for predicting absent keyphrases based on the “understanding” of content. It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task. Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task. Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted. We use the absent keyphrases in the testing datasets for evaluation. Dataset RNN CopyRNN R@10 R@50 R@10 R@50 Inspec 0.031 0.061 0.047 0.100 Krapivin 0.095 0.156 0.113 0.202 NUS 0.050 0.089 0.058 0.116 SemEval 0.041 0.060 0.043 0.067 KP20k 0.083 0.144 0.125 0.211 Table 3: Absent keyphrases prediction performance of RNN and CopyRNN on five datasets Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% 588 (15%) of keyphrases at top 10 (50) predictions. This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions. In addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task. An example is shown in Figure 1(b), in which we see that two absent keyphrases, “video retrieval” and “video indexing”, are correctly recalled by both models. Note that the term “indexing” does not appear in the text, but the models may detect the information “index videos” in the first sentence and paraphrase it to the target phrase. And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments). Model F1 Model F1 Tf-Idf 0.270 ExpandRank 0.269 TextRank 0.097 KeyCluster 0.140 SingleRank 0.256 CopyRNN 0.164 Table 4: Keyphrase prediction performance of CopyRNN on DUC-2001. The model is trained on scientific publication and evaluated on news. 5.3 Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style. However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora. Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment. We use the popular news article dataset DUC2001 (Wan and Xiao, 2008) for analysis. 
The dataset consists of 308 news articles and 2,488 manually annotated keyphrases. The result of this analysis is shown in Table 4, from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text. Compared to the results reported in (Hasan and Ng, 2010), the performance of CopyRNN is better than TextRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009), but lags behind the other three baselines. As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text. In this experiment, the CopyRNN recalls 766 keyphrases. 14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted. 6 Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text. In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases. Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts. We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus. Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training. We believe that with proper training on news data, the model would make further improvement. Additionally, this work mainly studies the problem of discovering core content from textual materials. Here, the encoder-decoder framework is applied to model language; however, such a framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos. 7 Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text. To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task. Our model summarizes phrases based the deep semantic meaning 589 Figure 1: An example of predicted keyphrase by RNN and CopyRNN. Phrases shown in bold are correct predictions. of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism. Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text. Our future work may include the following two directions. – In this work, we only evaluated the performance of the proposed model by conducting off-line experiments. In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases. – Our current model does not fully consider correlation among target keyphrases. It would also be interesting to explore the multiple-output optimization aspects of our model. 
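Before closing, for readers who want to experiment with the scoring step in Equations 5 and 6, the sketch below shows one way the generate-mode and copy-mode scores can be combined into a single prediction distribution. It reflects our reading of Gu et al. (2016): tanh stands in for the unspecified non-linearity σ, and a single shared normalizer Z is applied over both modes. These choices, and all variable names, are assumptions rather than details taken from this paper.

```python
import numpy as np

def predict_next_word(gen_logits, h_src, s_t, W_c, src_ids, V):
    """Combine generate-mode and copy-mode scores into p(y_t | y_<t, x).

    gen_logits : (V,)   unnormalised scores over the vocabulary (generate mode)
    h_src      : (L, d) encoder hidden states of the L source-text words
    s_t        : (d,)   current decoder state
    W_c        : (d, d) learned copy projection (Equation 6)
    src_ids    : (L,)   word ids of the source words; OOV words get ids >= V
    """
    # psi_c(x_j) = sigma(h_j^T W_c) s_t  -- copy score of source position j
    copy_logits = np.tanh(h_src @ W_c) @ s_t            # (L,)

    # Joint normalisation over both modes (one shared Z)
    scores = np.concatenate([gen_logits, copy_logits])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    p_gen, p_copy = probs[:V], probs[V:]

    # p(y_t = w) = p_g(w) + sum of copy mass on positions j where x_j = w
    extended_V = max(V, int(src_ids.max()) + 1)
    p = np.zeros(extended_V)
    p[:V] = p_gen
    np.add.at(p, src_ids, p_copy)
    return p
```

At decoding time, a distribution like this would feed the beam search described in Section 4.3, with out-of-vocabulary source words occupying the extended part of the distribution.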
Acknowledgments We would like to thank Jiatao Gu and Miltiadis Allamanis for sharing the source code and giving helpful advice. We also thank Wei Lu, Yong Huang, Qikai Cheng and other IRLAB members at Wuhan University for the assistance of dataset development. This work is partially supported by the National Science Foundation under Grant No.1525186. References M. Allamanis, H. Peng, and C. Sutton. 2016. A Convolutional Attention Network for Extreme Summarization of Source Code. ArXiv e-prints . Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . G´abor Berend. 2011. Opinion expression mining by exploiting keyphrase extraction. In IJCNLP. Citeseer, pages 1162–1170. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . Eibe Frank, Gordon W Paynter, Ian H Witten, Carl Gutwin, and Craig G Nevill-Manning. 1999. Domain-specific keyphrase extraction . Felix A Gers and E Schmidhuber. 2001. Lstm recurrent networks learn simple context-free and contextsensitive languages. IEEE Transactions on Neural Networks 12(6):1333–1340. Sujatha Das Gollapalli and Cornelia Caragea. 2014. Extracting keyphrases from research papers using citation networks. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’14, pages 1629–1635. http://dl.acm.org/citation.cfm?id=2892753.2892779. Maria Grineva, Maxim Grinev, and Dmitry Lizorkin. 2009. Extracting key terms from noisy and multitheme documents. In Proceedings of the 18th International Conference on World Wide Web. ACM, New York, NY, USA, WWW ’09, pages 661–670. https://doi.org/10.1145/1526709.1526798. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393 . 590 Shuguang Han, Daqing He, Jiepu Jiang, and Zhen Yue. 2013. Supporting exploratory people search: a study of factor transparency and user control. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management. ACM, pages 449–458. Kazi Saidul Hasan and Vincent Ng. 2010. Conundrums in unsupervised keyphrase extraction: making sense of the state-of-the-art. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. Association for Computational Linguistics, pages 365–373. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 conference on Empirical methods in natural language processing. Association for Computational Linguistics, pages 216–223. Anette Hulth and Be´ata B Megyesi. 2006. A study on automatically extracted keywords in text categorization. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 537–544. Steve Jones and Mark S Staveley. 1999. Phrasier: a system for interactive document retrieval using keyphrases. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 160–167. 
Daniel Kelleher and Saturnino Luz. 2005. Automatic hypertext keyphrase detection. In Proceedings of the 19th International Joint Conference on Artificial Intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, IJCAI’05, pages 1608–1609. http://dl.acm.org/citation.cfm?id=1642293.1642576. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5: Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 21–26. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Mikalai Krapivin, Aliaksandr Autayeu, and Maurizio Marchese. 2008. Large dataset for keyphrases extraction. Technical Report DISI-09-055, DISI, Trento, Italy. Tho Thi Ngoc Le, Minh Le Nguyen, and Akira Shimazu. 2016. Unsupervised Keyphrase Extraction: Introducing New Kinds of Words to Keyphrases, Springer International Publishing, Cham, pages 665–671. Zhiyuan Liu, Xinxiong Chen, Yabin Zheng, and Maosong Sun. 2011. Automatic keyphrase extraction by bridging vocabulary gap. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 135–144. Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 366–376. Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1. Association for Computational Linguistics, pages 257–266. Patrice Lopez and Laurent Romary. 2010. Humb: Automatic key term extraction from scientific articles in grobid. In Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics, Stroudsburg, PA, USA, SemEval ’10, pages 248–251. http://dl.acm.org/citation.cfm?id=1859664.1859719. Sumit Chopra Marc’Aurelio Ranzato, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. ICLR, San Juan, Puerto Rico . Yutaka Matsuo and Mitsuru Ishizuka. 2004. Keyword extraction from a single document using word co-occurrence statistical information. International Journal on Artificial Intelligence Tools 13(01):157– 169. Olena Medelyan, Eibe Frank, and Ian H Witten. 2009a. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3. Association for Computational Linguistics, pages 1318–1327. Olena Medelyan, Eibe Frank, and Ian H. Witten. 2009b. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP ’09, pages 1318–1327. http://dl.acm.org/citation.cfm?id=1699648.1699678. Olena Medelyan, Ian H Witten, and David Milne. 2008. Topic indexing with wikipedia. In Proceedings of the AAAI WikiAI workshop. volume 1, pages 19–24. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. Association for Computational Linguistics. 591 Thuy Dung Nguyen and Min-Yen Kan. 2007. 
Keyphrase extraction in scientific publications. In International Conference on Asian Digital Libraries. Springer, pages 317–326. Meng Rui, Han Shuguang, Huang Yun, He Daqing, and Brusilovsky Peter. 2016. Knowledge-based content linking for online textbooks. In 2016 IEEE/WIC/ACM International Conference on Web Intelligence. The Institute of Electrical and Electronics Engineers, pages 18–25. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 379–389. http://aclweb.org/anthology/D/D15/D15-1044.pdf. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1683–1692. http://www.aclweb.org/anthology/P16-1159. Min Song, Il-Yeol Song, and Xiaohua Hu. 2003. Kpspotter: a flexible information gain-based keyphrase extraction system. In Proceedings of the 5th ACM international workshop on Web information and data management. ACM, pages 50–53. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Takashi Tomokiyo and Matthew Hurst. 2003. A language model approach to keyphrase extraction. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment - Volume 18. Association for Computational Linguistics, Stroudsburg, PA, USA, MWE ’03, pages 33– 40. https://doi.org/10.3115/1119282.1119287. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. pages 2773–2781. Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. Minmei Wang, Bo Zhao, and Yihua Huang. 2016. PTR: Phrase-Based Topical Ranking for Automatic Keyphrase Extraction in Scientific Publications, Springer International Publishing, Cham, pages 120–128. Ian H Witten, Gordon W Paynter, Eibe Frank, Carl Gutwin, and Craig G Nevill-Manning. 1999. Kea: Practical automatic keyphrase extraction. In Proceedings of the fourth ACM conference on Digital libraries. ACM, pages 254–255. Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382 . Qi Zhang, Yang Wang, Yeyun Gong, and Xuanjing Huang. 2016. Keyphrase extraction using deep recurrent neural networks on twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 836–845. https://aclweb.org/anthology/D16-1080. Yongzheng Zhang, Nur Zincir-Heywood, and Evangelos Milios. 2004. World wide web site summarization. Web Intelligence and Agent Systems: An International Journal 2(1):39–53. 592
2017
54
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 593–602 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1055 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 593–602 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1055 Attention-over-Attention Neural Networks for Reading Comprehension Yiming Cui†, Zhipeng Chen†, Si Wei†, Shijin Wang†, Ting Liu‡ and Guoping Hu† †Joint Laboratory of HIT and iFLYTEK, iFLYTEK Research, Beijing, China ‡Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China †{ymcui,zpchen,siwei,sjwang3,gphu}@iflytek.com ‡[email protected] Abstract Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces “attended attention” for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-ofthe-art systems by a large margin in public datasets, such as CNN and Children’s Book Test. 1 Introduction To read and comprehend the human languages are challenging tasks for the machines, which requires that the understanding of natural languages and the ability to do reasoning over various clues. Reading comprehension is a general problem in the real world, which aims to read and comprehend a given article or context, and answer the questions based on it. Recently, the cloze-style reading comprehension problem has become a popular task in the community. The cloze-style query (Taylor, 1953) is a problem that to fill in an appropriate word in the given sentences while taking the context information into account. To teach the machine to do cloze-style reading comprehensions, large-scale training data is necessary for learning relationships between the given document and query. To create large-scale training data for neural networks, Hermann et al. (2015) released the CNN/Daily Mail news dataset, where the document is formed by the news articles and the queries are extracted from the summary of the news. Hill et al. (2015) released the Children’s Book Test dataset afterwards, where the training samples are generated from consecutive 20 sentences from books, and the query is formed by 21st sentence. Following these datasets, a vast variety of neural network approaches have been proposed (Kadlec et al., 2016; Cui et al., 2016; Chen et al., 2016; Dhingra et al., 2016; Sordoni et al., 2016; Trischler et al., 2016; Seo et al., 2016; Xiong et al., 2016), and most of them stem from the attention-based neural network (Bahdanau et al., 2014), which has become a stereotype in most of the NLP tasks and is well-known by its capability of learning the “importance” distribution over the inputs. 
In this paper, we present a novel neural network architecture, called attention-over-attention model. As we can understand the meaning literally, our model aims to place another attention mechanism over the existing document-level attention. Unlike the previous works, that are using heuristic merging functions (Cui et al., 2016), or setting various pre-defined non-trainable terms (Trischler et al., 2016), our model could automatically generate an “attended attention” over various document-level attentions, and make a mutual look not only from query-to-document but also document-to-query, which will benefit from the interactive information. To sum up, the main contributions of our work are listed as follows. • To our knowledge, this is the first time that 593 the mechanism of nesting another attention over the existing attentions is proposed, i.e. attention-over-attention mechanism. • Unlike the previous works on introducing complex architectures or many non-trainable hyper-parameters to the model, our model is much more simple but outperforms various state-of-the-art systems by a large margin. • We also propose an N-best re-ranking strategy to re-score the candidates in various aspects and further improve the performance. The following of the paper will be organized as follows. In Section 2, we will give a brief introduction to the cloze-style reading comprehension task as well as related public datasets. Then the proposed attention-over-attention reader will be presented in detail in Section 3 and N-best reranking strategy in Section 4. The experimental results and analysis will be given in Section 5 and Section 6. Related work will be discussed in Section 7. Finally, we will give a conclusion of this paper and envisions on future work. 2 Cloze-style Reading Comprehension In this section, we will give a brief introduction to the cloze-style reading comprehension task at the beginning. And then, several existing public datasets will be described in detail. 2.1 Task Description Formally, a general Cloze-style reading comprehension problem can be illustrated as a triple: ⟨D, Q, A⟩ The triple consists of a document D, a query Q and the answer to the query A. Note that the answer is usually a single word in the document, which requires the human to exploit context information in both document and query. The type of the answer word varies from predicting a preposition given a fixed collocation to identifying a named entity from a factual illustration. 2.2 Existing Public Datasets Large-scale training data is essential for training neural networks. Several public datasets for the cloze-style reading comprehension has been released. Here, we introduce two representative and widely-used datasets. • CNN / Daily Mail Hermann et al. (2015) have firstly published two datasets: CNN and Daily Mail news data 1. They construct these datasets with web-crawled CNN and Daily Mail news data. One of the characteristics of these datasets is that the news article is often associated with a summary. So they first regard the main body of the news article as the Document, and the Query is formed by the summary of the article, where one entity word is replaced by a special placeholder to indicate the missing word. The replaced entity word will be the Answer of the Query. Apart from releasing the dataset, they also proposed a methodology that anonymizes the named entity tokens in the data, and these tokens are also re-shuffle in each sample. 
The motivation is that the news articles are containing limited named entities, which are usually celebrities, and the world knowledge can be learned from the dataset. So this methodology aims to exploit general relationships between anonymized named entities within a single document rather than the common knowledge. The following research on these datasets showed that the entity word anonymization is not as effective as expected (Chen et al., 2016). • Children’s Book Test There was also a dataset called the Children’s Book Test (CBTest) released by Hill et al. (2015), which is built on the children’s book story through Project Gutenberg 2. Different from the CNN/Daily Mail datasets, there is no summary available in the children’s book. So they proposed another way to extract query from the original data. The document is composed of 20 consecutive sentences in the story, and the 21st sentence is regarded as the query, where one word is blanked with a special placeholder. In the CBTest datasets, there are four types of sub-datasets available which are classified by the part-of-speech and named entity tag of the answer word, containing Named Entities (NE), Common Nouns (CN), Verbs and Prepositions. In their studies, they have found that the answering of verbs and prepositions are relatively less dependent on the content of document, and the humans can even do preposi1The pre-processed CNN and Daily Mail datasets are available at http://cs.nyu.edu/˜kcho/DMQA/ 2The CBTest datasets are available at http: //www.thespermwhale.com/jaseweston/babi/ CBTest.tgz 594 tion blank-filling without the presence of the document. The studies shown by Hill et al. (2015), answering verbs and prepositions are less dependent with the presence of document. Thus, most of the related works are focusing on solving NE and CN types. 3 Attention-over-Attention Reader In this section, we will give a detailed introduction to the proposed Attention-over-Attention Reader (AoA Reader). Our model is primarily motivated by Kadlec et al., (2016), which aims to directly estimate the answer from the document-level attention instead of calculating blended representations of the document. As previous studies by Cui et al. (2016) showed that the further investigation of query representation is necessary, and it should be paid more attention to utilizing the information of query. In this paper, we propose a novel work that placing another attention over the primary attentions, to indicate the “importance” of each attentions. Now, we will give a formal description of our proposed model. When a cloze-style training triple ⟨D, Q, A⟩is given, the proposed model will be constructed in the following steps. • Contextual Embedding We first transform every word in the document D and query Q into one-hot representations and then convert them into continuous representations with a shared embedding matrix We. By sharing word embedding, both the document and query can participate in the learning of embedding and both of them will benefit from this mechanism. After that, we use two bi-directional RNNs to get contextual representations of the document and query individually, where the representation of each word is formed by concatenating the forward and backward hidden states. After making a trade-off between model performance and training complexity, we choose the Gated Recurrent Unit (GRU) (Cho et al., 2014) as recurrent unit implementation. 
$e(x) = W_e \cdot x$, where $x \in D, Q$ (1)
$\overrightarrow{h_s}(x) = \overrightarrow{\mathrm{GRU}}(e(x))$ (2)
$\overleftarrow{h_s}(x) = \overleftarrow{\mathrm{GRU}}(e(x))$ (3)
$h_s(x) = [\overrightarrow{h_s}(x); \overleftarrow{h_s}(x)]$ (4)
We take $h_{doc} \in \mathbb{R}^{|D| \times 2d}$ and $h_{query} \in \mathbb{R}^{|Q| \times 2d}$ to denote the contextual representations of document and query, where $d$ is the dimension of the GRU (one direction).
• Pair-wise Matching Score After obtaining the contextual embeddings of the document $h_{doc}$ and query $h_{query}$, we calculate a pair-wise matching matrix, which indicates the pair-wise matching degree of one document word and one query word. Formally, given the $i$-th word of the document and the $j$-th word of the query, we can compute a matching score by their dot product.
$M(i, j) = h_{doc}(i)^{T} \cdot h_{query}(j)$ (5)
In this way, we can calculate every pair-wise matching score between each document and query word, forming a matrix $M \in \mathbb{R}^{|D| \times |Q|}$, whose entry in the $i$-th row and $j$-th column is $M(i, j)$.
• Individual Attentions After getting the pair-wise matching matrix $M$, we apply a column-wise softmax function to get probability distributions in each column, where each column is an individual document-level attention when considering a single query word. We denote $\alpha(t) \in \mathbb{R}^{|D|}$ as the document-level attention regarding the query word at time $t$, which can be seen as a query-to-document attention.
$\alpha(t) = \mathrm{softmax}(M(1, t), \ldots, M(|D|, t))$ (6)
$\alpha = [\alpha(1), \alpha(2), \ldots, \alpha(|Q|)]$ (7)
• Attention-over-Attention Different from Cui et al. (2016), instead of using naive heuristics (such as summing or averaging) to combine these individual attentions into a final attention, we introduce another attention mechanism to automatically decide the importance of each individual attention. First, we calculate a reversed attention, that is, for every document word at time $t$, we calculate the "importance" distribution over the query, to indicate which query words are more important given a single document word. We apply a row-wise softmax function to the pair-wise matching matrix $M$ to get query-level attentions. We denote $\beta(t) \in \mathbb{R}^{|Q|}$ as the query-level attention regarding the document word at time $t$, which can be seen as a
In this way, the contributions by each query word can be learned explicitly, and the final decision (document-level attention) is made through the voted result by the importance of each query word. s = αT β (10) • Final Predictions Following Kadlec et al. (2016), we use sum attention mechanism to get aggregated results. Note that the final output should be reflected in the vocabulary space V , rather than document-level attention |D|, which will make a significant difference in the performance, though Kadlec et al. (2016) did not illustrate this clearly. P(w|D, Q) = X i∈I(w,D) si, w ∈V (11) where I(w, D) indicate the positions that word w appears in the document D. As the training objectives, we seek to maximize the log-likelihood of the correct answer. L = X i log(p(x)) , x ∈A (12) 596 CNN News CBT NE CBT CN Train Valid Test Train Valid Test Train Valid Test # Query 380,298 3,924 3,198 108,719 2,000 2,500 120,769 2,000 2,500 Max # candidates 527 187 396 10 10 10 10 10 10 Avg # candidates 26 26 25 10 10 10 10 10 10 Avg # tokens 762 763 716 433 412 424 470 448 461 Vocabulary 118,497 53,063 53,185 Table 1: Statistics of cloze-style reading comprehension datasets: CNN news and CBTest NE / CN. The proposed neural network architecture is depicted in Figure 1. Note that, as our model mainly adds limited steps of calculations to the AS Reader (Kadlec et al., 2016) and does not employ any additional weights, the computational complexity is similar to the AS Reader. 4 N-best Re-ranking Strategy Intuitively, when we do cloze-style reading comprehensions, we often refill the candidate into the blank of the query to double-check its appropriateness, fluency and grammar to see if the candidate we choose is the most suitable one. If we do find some problems in the candidate we choose, we will choose the second possible candidate and do some checking again. To mimic the process of double-checking, we propose to use N-best re-ranking strategy after generating answers from our neural networks. The procedure can be illustrated as follows. • N-best Decoding Instead of only picking the candidate that has the highest possibility as answer, we can also extract follow-up candidates in the decoding process, which forms an N-best list. • Refill Candidate into Query As a characteristic of the cloze-style problem, each candidate can be refilled into the blank of the query to form a complete sentence. This allows us to check the candidate according to its context. • Feature Scoring The candidate sentences can be scored in many aspects. In this paper, we exploit three features to score the N-best list. • Global N-gram LM: This is a fundamental metric in scoring sentence, which aims to evaluate its fluency. This model is trained on the document part of training data. • Local N-gram LM: Different from global LM, the local LM aims to explore the information with the given document, so the statistics are obtained from the test-time document. It should be noted that the local LM is trained sample-by-sample, it is not trained on the entire test set, which is not legal in the real test case. This model is useful when there are many unknown words in the test sample. • Word-class LM: Similar to global LM, the word-class LM is also trained on the document part of training data, but the words are converted to its word class ID. The word class can be obtained by using clustering methods. In this paper, we simply utilized the mkcls tool for generating 1000 word classes (Josef Och, 1999). 
• Weight Tuning To tune the weights among these features, we adopt the K-best MIRA algorithm (Cherry and Foster, 2012) to automatically optimize the weights on the validation set, which is widely used in statistical machine translation tuning procedure. • Re-scoring and Re-ranking After getting the weights of each feature, we calculate the weighted sum of each feature in the Nbest sentences and then choose the candidate that has the lowest cost as the final answer. 5 Experiments 5.1 Experimental Setups The general settings of our neural network model are listed below in detail. • Embedding Layer: The embedding weights are randomly initialized with the uniformed distribution in the interval [−0.05, 0.05]. 597 CNN News CBTest NE CBTest CN Valid Test Valid Test Valid Test Deep LSTM Reader (Hermann et al., 2015) 55.0 57.0 Attentive Reader (Hermann et al., 2015) 61.6 63.0 Human (context+query) (Hill et al., 2015) 81.6 81.6 MemNN (window + self-sup.) (Hill et al., 2015) 63.4 66.8 70.4 66.6 64.2 63.0 AS Reader (Kadlec et al., 2016) 68.6 69.5 73.8 68.6 68.8 63.4 CAS Reader (Cui et al., 2016) 68.2 70.0 74.2 69.2 68.2 65.7 Stanford AR (Chen et al., 2016) 72.4 72.4 GA Reader (Dhingra et al., 2016) 73.0 73.8 74.9 69.0 69.0 63.9 Iterative Attention (Sordoni et al., 2016) 72.6 73.3 75.2 68.6 72.1 69.2 EpiReader (Trischler et al., 2016) 73.4 74.0 75.3 69.7 71.5 67.4 AoA Reader 73.1 74.4 77.8 72.0 72.2 69.4 AoA Reader + Reranking 79.6 74.0 75.7 73.1 MemNN (Ensemble) 66.2 69.4 AS Reader (Ensemble) 73.9 75.4 74.5 70.6 71.1 68.9 GA Reader (Ensemble) 76.4 77.4 75.5 71.9 72.1 69.4 EpiReader (Ensemble) 76.6 71.8 73.6 70.6 Iterative Attention (Ensemble) 74.5 75.7 76.9 72.0 74.1 71.0 AoA Reader (Ensemble) 78.9 74.5 74.7 70.8 AoA Reader (Ensemble + Reranking) 80.3 75.6 77.0 74.1 Table 2: Results on the CNN news, CBTest NE and CN datasets. The best baseline results are depicted in italics, and the overall best results are in bold face. For regularization purpose, we adopted l2regularization to 0.0001 and dropout rate of 0.1 (Srivastava et al., 2014). Also, it should be noted that we do not exploit any pretrained embedding models. • Hidden Layer: Internal weights of GRUs are initialized with random orthogonal matrices (Saxe et al., 2013). • Optimization: We adopted ADAM optimizer for weight updating (Kingma and Ba, 2014), with an initial learning rate of 0.001. As the GRU units still suffer from the gradient exploding issues, we set the gradient clipping threshold to 5 (Pascanu et al., 2013). We used batched training strategy of 32 samples. Dimensions of embedding and hidden layer for each task are listed in Table 3. In re-ranking step, we generate 5-best list from the baseline neural network model, as we did not observe a significant variance when changing the N-best list size. All language model features are trained on the training proportion of each dataset, with 8-gram wordbased setting and Kneser-Ney smoothing (Kneser and Ney, 1995) trained by SRILM toolkit (Stolcke, 2002). The results are reported with the best model, which is selected by the performance of validation set. The ensemble model is made up of four best models, which are trained using different random seed. Implementation is done with Theano (Theano Development Team, 2016) and Keras (Chollet, 2015), and all models are trained on Tesla K40 GPU. Embed. # units Hidden # units CNN News 384 256 CBTest NE 384 384 CBTest CN 384 256 Table 3: Embedding and hidden layer dimensions for each task. 
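Before turning to the results, the scoring path of Section 3 (Equations 5–11) can be condensed into a short NumPy sketch. The bi-GRU contextual embeddings are assumed to be given, and the variable names are ours; this is an illustration of the computation, not the released implementation.

```python
import numpy as np

def softmax(x, axis):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def aoa_answer_distribution(h_doc, h_query, doc_words):
    """Attention-over-attention scoring.

    h_doc     : (|D|, 2d) contextual embeddings of document words
    h_query   : (|Q|, 2d) contextual embeddings of query words
    doc_words : list of the |D| document words
    """
    M = h_doc @ h_query.T              # pair-wise matching scores (Eq. 5)
    alpha = softmax(M, axis=0)         # column-wise: query-to-document attentions (Eq. 6-7)
    beta = softmax(M, axis=1)          # row-wise: document-to-query attentions (Eq. 8)
    beta_avg = beta.mean(axis=0)       # averaged query-level attention (Eq. 9)
    s = alpha @ beta_avg               # attended document-level attention (Eq. 10)

    # Sum attention over the positions of each word (Eq. 11)
    p = {}
    for i, w in enumerate(doc_words):
        p[w] = p.get(w, 0.0) + s[i]
    return p
```

The predicted answer is the candidate word with the largest aggregated score, and training maximizes the log-likelihood of the correct answer (Equation 12).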
5.2 Overall Results Our experiments are carried out on public datasets: CNN news datasets (Hermann et al., 2015) and CBTest NE/CN datasets (Hill et al., 2015). The statistics of these datasets are listed in Table 1, and the experimental results are given in Table 2. 598 As we can see that, our AoA Reader outperforms state-of-the-art systems by a large margin, where 2.3% and 2.0% absolute improvements over EpiReader in CBTest NE and CN test sets, which demonstrate the effectiveness of our model. Also by adding additional features in the re-ranking step, there is another significant boost 2.0% to 3.7% over AoA Reader in CBTest NE/CN test sets. We have also found that our single model could stay on par with the previous best ensemble system, and even we have an absolute improvement of 0.9% beyond the best ensemble model (Iterative Attention) in the CBTest NE validation set. When it comes to ensemble model, our AoA Reader also shows significant improvements over previous best ensemble models by a large margin and set up a new state-of-the-art system. To investigate the effectiveness of employing attention-over-attention mechanism, we also compared our model to CAS Reader, which used predefined merging heuristics, such as sum or avg etc. Instead of using pre-defined merging heuristics, and letting the model explicitly learn the weights between individual attentions results in a significant boost in the performance, where 4.1% and 3.7% improvements can be made in CNN validation and test set against CAS Reader. 5.3 Effectiveness of Re-ranking Strategy As we have seen that the re-ranking approach is effective in cloze-style reading comprehension task, we will give a detailed ablations in this section to show the contributions by each feature. To have a thorough investigation in the re-ranking step, we listed the detailed improvements while adding each feature mentioned in Section 4. From the results in Table 4, we found that the NE and CN category both benefit a lot from the re-ranking features, but the proportions are quite different. Generally speaking, in NE category, the performance is mainly boosted by the LMlocal feature. However, on the contrary, the CN category benefits from LMglobal and LMwc rather than the LMlocal. Also, we listed the weights of each feature in Table 5. The LMglobal and LMwc are all trained by training set, which can be seen as Global Feature. However, the LMlocal is only trained within the respective document part of test sample, which can be seen as Local Feature. η = LMglobal + LMwc LMlocal (13) CBTest NE CBTest CN Valid Test Valid Test AoA Reader 77.8 72.0 72.2 69.4 +Global LM 78.3 72.6 73.9 71.2 +Local LM 79.4 73.8 74.7 71.7 +Word-class LM 79.6 74.0 75.7 73.1 Table 4: Detailed results of 5-best re-ranking on CBTest NE/CN datasets. Each row includes all of the features from previous rows. LMglobal denotes the global LM, LMlocal denotes the local LM, LMwc denotes the word-class LM. CBTest NE CBTest CN NN 0.64 0.20 Global LM 0.16 0.10 Word-class LM 0.04 0.39 Local LM 0.16 0.31 RATIO η 1.25 1.58 Table 5: Weight of each feature in N-best reranking step. NN denotes the feature (probability) produced by baseline neural network model. We calculated the ratio between the global and local features and found that the NE category is much more dependent on local features than CN category. Because it is much more likely to meet a new named entity than a common noun in the test phase, so adding the local LM provides much more information than that of common noun. 
However, on the contrary, answering common noun requires less local information, which can be learned in the training data relatively. 6 Quantitative Analysis In this section, we will give a quantitative analysis to our AoA Reader. The following analyses are carried out on CBTest NE dataset. First, we investigate the relations between the length of the document and corresponding accuracy. The result is depicted in Figure 2. As we can see that the AoA Reader shows consistent improvements over AS Reader on the different length of the document. Especially, when the length of document exceeds 700, the improvements become larger, indicating that the AoA Reader is more capable of handling long documents. 599 18 486 758 525 370 262 61 AoA Reader AS Reader Accuracy 0.65 0.70 0.75 0.80 0.85 0.90 Length of Document 100 200 300 400 500 600 700 800 Figure 2: Test accuracy against the length of the document. The bar below the figure indicates the number of samples in each interval. 1071 588 354 264 127 59 28 8 1 1 AoA Reader AS Reader Accuracy 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Rank of the answer 1 2 3 4 5 6 7 8 9 10 Figure 3: Test accuracy against the frequency rank of the answer. The bar below the figure indicates the number of samples in each rank. Furthermore, we also investigate if the model tends to choose a high-frequency candidate than a lower one, which is shown in Figure 3. Not surprisingly, we found that both models do a good job when the correct answer appears more frequent in the document than the other candidates. This is because that the correct answer that has the highest frequency among the candidates takes up over 40% of the test set (1071 out of 2500). But interestingly we have also found that, when the frequency rank of correct answer exceeds 7 (less frequent among candidates), these models also give a relatively high performance. Empirically, we think that these models tend to choose extreme cases in terms of candidate frequency (either too high or too low). One possible reason is that it is hard for the model to choose a candidate that has a neutral frequency as the correct answer, because of its ambiguity (neutral choices are hard to made). 7 Related Work Cloze-style reading comprehension tasks have been widely investigated in recent studies. We will take a brief revisit to the related works. Hermann et al. (2015) have proposed a method for obtaining large quantities of ⟨D, Q, A⟩triples through news articles and its summary. Along with the release of cloze-style reading comprehension dataset, they also proposed an attention-based neural network to handle this task. Experimental results showed that the proposed neural network is effective than traditional baselines. Hill et al. (2015) released another dataset, which stems from the children’s books. Different from Hermann et al. (2015)’s work, the document and query are all generated from the raw story without any summary, which is much more general than previous work. To handle the reading comprehension task, they proposed a window-based memory network, and self-supervision heuristics is also applied to learn hard-attention. Unlike previous works, that using blended representations of document and query to estimate the answer, Kadlec et al. (2016) proposed a simple model that directly pick the answer from the document, which is motivated by the Pointer Network (Vinyals et al., 2015). A restriction of this model is that the answer should be a single word and appear in the document. 
Results on various public datasets showed that the proposed model is effective than previous works. Liu et al. (2016) proposed to exploit reading comprehension models to other tasks. They first applied the reading comprehension model into Chinese zero pronoun resolution task with automatically generated large-scale pseudo training data. The experimental results on OntoNotes 5.0 data showed that their method significantly outperforms various state-of-the-art systems. Our work is primarily inspired by Cui et al. (2016) and Kadlec et al. (2016) , where the latter model is widely applied to many follow-up works (Sordoni et al., 2016; Trischler et al., 2016; Cui et al., 2016). Unlike the CAS Reader (Cui et al., 2016), we do not assume any heuristics to our model, such as using merge functions: sum, avg etc. We used a mechanism called “attention600 over-attention” to explicitly calculate the weights between different individual document-level attentions, and get the final attention by computing the weighted sum of them. Also, we find that our model is typically general and simple than the recently proposed model, and brings significant improvements over these cutting edge systems. 8 Conclusion We present a novel neural architecture, called attention-over-attention reader, to tackle the clozestyle reading comprehension task. The proposed AoA Reader aims to compute the attentions not only for the document but also the query side, which will benefit from the mutual information. Then a weighted sum of attention is carried out to get an attended attention over the document for the final predictions. Among several public datasets, our model could give consistent and significant improvements over various state-of-theart systems by a large margin. The future work will be carried out in the following aspects. We believe that our model is general and may apply to other tasks as well, so firstly we are going to fully investigate the usage of this architecture in other tasks. Also, we are interested to see that if the machine really “comprehend” our language by utilizing neural networks approaches, but not only serve as a “document-level” language model. In this context, we are planning to investigate the problems that need comprehensive reasoning over several sentences. Acknowledgments We would like to thank all three anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 . Danqi Chen, Jason Bolton, and D. Christopher Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 2358–2367. https://doi.org/10.18653/v1/P161223. Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Montr´eal, Canada, pages 427–436. http://www.aclweb.org/anthology/N12-1047. 
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 1724–1734. http://aclweb.org/anthology/D14-1179. Franc¸ois Chollet. 2015. Keras. https://github. com/fchollet/keras. Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. 2016. Consensus attentionbased neural networks for chinese reading comprehension. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 1777– 1786. http://aclweb.org/anthology/C16-1167. Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549 . Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1684– 1692. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Franz Josef Och. 1999. An efficient method for determining bilingual word classes. In Ninth Conference of the European Chapter of the Association for Computational Linguistics. http://aclweb.org/anthology/E99-1010. Rudolf Kadlec, Martin Schmid, Ondˇrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 908– 918. https://doi.org/10.18653/v1/P16-1086. 601 Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In International Conference on Acoustics, Speech, and Signal Processing. pages 181–184 vol.1. Ting Liu, Yiming Cui, Qingyu Yin, Shijin Wang, Weinan Zhang, and Guoping Hu. 2016. Generating and exploiting large-scale pseudo training data for zero pronoun resolution. arXiv preprint arXiv:1606.01603 . Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 . Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hananneh Hajishirzi. 2016. Bi-directional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245 . Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Andreas Stolcke. 2002. Srilm — an extensible language modeling toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing (ICSLP 2002). pages 901–904. 
Wilson L Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism and Mass Communication Quarterly 30(4):415. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. Adam Trischler, Zheng Ye, Xingdi Yuan, Philip Bachman, Alessandro Sordoni, and Kaheer Suleman. 2016. Natural language comprehension with the epireader. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 128–137. http://aclweb.org/anthology/D161013. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . 602
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 603–612 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1056 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 603–612 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1056 Alignment at Work: Using Language to Distinguish the Internalization and Self-Regulation Components of Cultural Fit in Organizations Gabriel Doyle Department of Psychology Stanford University [email protected] Amir Goldberg Graduate School of Business Stanford University [email protected] Sameer B. Srivastava Haas School of Business UC Berkeley [email protected] Michael C. Frank Department of Psychology Stanford University [email protected] Abstract Cultural fit is widely believed to affect the success of individuals and the groups to which they belong. Yet it remains an elusive, poorly measured construct. Recent research draws on computational linguistics to measure cultural fit but overlooks asymmetries in cultural adaptation. By contrast, we develop a directed, dynamic measure of cultural fit based on linguistic alignment, which estimates the influence of one person’s word use on another’s and distinguishes between two enculturation mechanisms: internalization and selfregulation. We use this measure to trace employees’ enculturation trajectories over a large, multi-year corpus of corporate emails and find that patterns of alignment in the first six months of employment are predictive of individuals downstream outcomes, especially involuntary exit. Further predictive analyses suggest referential alignment plays an overlooked role in linguistic alignment. 1 Introduction Entering a new group is rarely easy. Adjusting to unfamiliar behavioral norms and donning a new identity can be cognitively and emotionally taxing, and failure to do so can lead to exclusion. But successful enculturation to the group often yields significant rewards, especially in organizational contexts. Fitting in has been tied to positive career outcomes such as faster time-to-promotion, higher performance ratings, and reduced risk of being fired (O’Reilly et al., 1991; Goldberg et al., 2016). A major challenge for enculturation research is distinguishing between internalization and selfregulation. Internalization, a more inwardly focused process, involves identifying as a group member and accepting group norms, while selfregulation, a more outwardly oriented process, entails deciphering the group’s normative code and adjusting one’s behavior to comply with it. Existing approaches, which generally rely on selfreports, are subject to various forms of reporting bias and typically yield only static snapshots of this process. Recent computational approaches that use language as a behavioral signature of group integration uncover dynamic traces of enculturation but cannot distinguish between internalization and self-regulation. To overcome these limitations, we introduce a dynamic measure of directed linguistic accommodation between a newcomer and existing group members. Our approach differentiates between an individual’s (1) base rate of word use and (2) linguistic alignment to interlocutors. 
The former corresponds to internalization of the group’s linguistic norms, whereas the latter reflects the capacity to regulate one’s language in response to peers’ language use. We apply this language model to a corpus of internal email communications and personnel records, spanning a seven-year period, from a mid-sized technology firm. We show that changes in base rates and alignment, especially with respect to pronoun use, are consistent with successful assimilation into a group and can predict eventual employment outcomes— continued employment, involuntary exit, or voluntary exit—at levels above chance. We use this predictive problem to investigate the nature of linguistic alignment. Our results suggest that the common formulation of alignment as a lexical-level phenomenon is incomplete. 603 2 Linguistic Alignment and Group Fit Linguistic alignment Linguistic alignment is the tendency to use the same or similar words as one’s conversational partner. Alignment is an instance of a widespread and socially important human behavior: communication accommodation, the tendency of two interacting people to nonconsciously adopt similar behaviors. Evidence of accommodation appears in many behavioral dimensions, including gestures, postures, speech rate, self-disclosure, and language or dialect choice (see Giles et al. (1991) for a review). More accommodating people are rated by their interlocutors as more intelligible, attractive, and cooperative (Feldman, 1968; Ireland et al., 2011; Triandis, 1960). These perceptions have material consequences—for example, high accommodation requests are more likely to be fulfilled, and pairs who accommodate more in how they express uncertainty perform better in lab-based tasks (Buller and Aune, 1988; Fusaroli et al., 2012). Although accommodation is ubiquitous, individuals vary in their levels of accommodation in ways that are socially informative. Notably, more powerful people are accommodated more strongly in many settings, including trials (Gnisci, 2005), online forums (Danescu-Niculescu-Mizil et al., 2012), and Twitter (Doyle et al., 2016). Most relevant for this work, speakers may increase their accommodation to signal camaraderie or decrease it to differentiate from the group. For example, Bourhis and Giles (1977) found that Welsh English speakers increased their use of the Welsh accent and language in response to an English speaker who dismissed it. Person-group fit and linguistic alignment These findings suggest that linguistic alignment is a useful avenue for studying how people assimilate into a group. Whereas traditional approaches to studying person-group fit rely on self-reports that are subject to various forms of reporting bias and cannot feasibly be collected with high granularity across many points in time, recent studies have proposed language-based measures as a means to tracing the dynamics of person-group fit without having to rely on self-reports. Building on Danescu-Niculescu-Mizil et al. (2013)’s research into language use similarities as a proxy for social distance between individuals, Srivastava et al. (forthcoming) and Goldberg et al. (2016) developed a measure of cultural fit based on the similarity in linguistic style between individuals and their colleagues in an organization. Their timevarying measure highlights linguistic compatibility as an important facet of cultural fit and reveals distinct trajectories of enculturation for employees with different career outcomes. 
While this approach can help uncover the dynamics and consequences of an individual’s fit with her colleagues in an organization, it cannot disentangle the underlying reasons for this alignment. For two primary reasons, it cannot distinguish between fit that arises from internalization and fit produced by self-regulation. First, Goldberg et al. (2016) and Srivastava et al. (forthcoming) define fit using a symmetric measure, the Jensen-Shannon divergence, which does not take into account the direction of alignment. Yet the distinction between an individual adapting to peers versus peers adapting to the individual would appear to be consequential. Second, this prior work considers fit across a wide range of linguistic categories but does not interrogate the role of particular categories, such as pronouns, that can be especially informative about enculturation. For example, a person’s base rate use of the first-person singular (I) or plural (we) might indicate the degree of group identity internalization, whereas adjustment to we usage in response to others’ use of the pronoun might reveal the degree of self-regulation to the group’s normative expectations. Modeling fit with WHAM To address these limitations, we build upon and extend the WHAM alignment framework (Doyle and Frank, 2016) to analyze the dynamics of internalization and selfregulation using the complete corpus of email communications and personnel records from a mid-sized technology company over a seven-year period. WHAM uses a conditional measure of alignment, separating overall homophily (unconditional similarity in people’s language use, driven by internalized similarity) from in-the-moment adaptation (adjusting to another’s usage, corresponding to self-regulation). WHAM also provides a directed measure of alignment, in that it estimates a replier’s adaptation to the other conversational participant separately from the participant’s adaptation to the replier. Level(s) of alignment The convention within linguistic alignment research, dating back to early 604 work on Linguistic Style Matching (Niederhoffer and Pennebaker, 2002), is to look at lexical alignment: the repetition of the same or similar words across conversation participants. From a communication accommodation standpoint, this is justified by assuming that one’s choice of words represents a stylistic signal that is partially independent of the meaning one intends to express—similar to the accommodation on paralinguistic signals discussed above. The success of previous linguistic alignment research shows that this is valid. However, words are difficult to divorce from their meanings, and sometimes repeating a word conflicts with repeating its referent. In particular, pronouns often refer to different people depending on who uses the pronoun. While there is evidence that one person using a first-person singular pronoun increases the likelihood that her conversation partner will as well (Chung and Pennebaker, 2007), we may also expect that one person using first-person singular pronouns may cause the other to use more second-person pronouns, so that both people are referring to the same person. This is especially important under the Interactive Alignment Model view (Pickering and Garrod, 2004), where conversants align their entire mental representations, which predicts both lexical and referential alignment behaviors will be observed. Discourse-strategic explanations for alignment also predict alignment at multiple levels (Doyle and Frank, 2016). 
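To make the contrast concrete, the two notions of alignment can be phrased as conditional usage rates over message-reply pairs. The sketch below is ours, not part of the released analysis code; the toy pairs and category labels are invented for illustration. Lexical alignment conditions a reply category on the same category in the message, while referential alignment conditions it on a linked category, such as I in the message priming you in the reply.

```python
from collections import Counter

def usage_rates(pairs, reply_cat, condition_cat):
    """Mean rate of `reply_cat` tokens in replies, split by whether the
    preceding message contained `condition_cat`."""
    rates = {True: [], False: []}
    for message_cats, reply_cats, reply_len in pairs:
        primed = condition_cat in message_cats
        rates[primed].append(reply_cats.get(reply_cat, 0) / reply_len)
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(rates[True]), mean(rates[False])

# Toy message-reply pairs: (categories in the message, category counts in the
# reply, total reply length in tokens).
pairs = [
    ({"i"}, Counter({"you": 2}), 20),
    ({"i"}, Counter({"i": 1, "you": 1}), 15),
    ({"we"}, Counter({"we": 3}), 25),
    (set(), Counter({"you": 1}), 30),
]

# Lexical alignment: does the reply reuse the *same* category as the message?
aligned, baseline = usage_rates(pairs, reply_cat="we", condition_cat="we")
print("lexical (we | we):", aligned, "vs baseline:", baseline)

# Referential alignment: "I" in the message primes "you" in the reply,
# since both then point to the same referent.
aligned, baseline = usage_rates(pairs, reply_cat="you", condition_cat="i")
print("referential (you | I):", aligned, "vs baseline:", baseline)
```

On real data, these conditional rates are what the hierarchical model described in Section 4 estimates in log-odds space, under priors, rather than as raw frequencies.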
Since we have access to a high-quality corpus with meaningful outcome measures, we can investigate the relative importance of these two types of alignment. We will show that referential alignment is more predictive of employment outcomes than is lexical alignment, suggesting a need for alignment research to consider both levels rather than just the latter. 3 Data: Corporate Email Corpus We use the complete corpus of internal emails exchanged among full-time employees at a midsized US-based technology company between 2009 to 2014 (Srivastava et al., forthcoming). Each email was summarized as a count of word categories in its text. These categories are a subset of the Linguistic Information and Word Count system (Pennebaker et al., 2007). The categories were chosen because they are likely to be indicative of one’s standing/role within a group.1 We divided email chains into message-reply pairs to investigate conditional alignment between a message and its reply. To limit these pairs to cases where the reply was likely related to the preceding message, we removed all emails with more than one sender or recipient (including CC/BCC), identical sender and recipient, or where the sender or recipient was an automatic notification system or any other mailbox that was not specific to a single employee. We also excluded emails with no body text or more than 500 words in the body text, and pairs with more than a week’s latency between message and reply. Finally, because our analyses involve enculturation dynamics over the first six months of employment, we excluded replies sent by an employee whose overall tenure was less than six months. This resulted in a collection of 407,779 messagereply pairs, with 485 distinct replying employees. We combined this with monthly updates of employees joining and leaving the company and whether they left voluntarily or involuntarily. Of the 485, 66 left voluntarily, 90 left involuntarily, and 329 remained employed at the end of the observation period. Privacy protections and ethical considerations Research based on employees’ archived electronic communications in organizational settings poses potential threats to employee privacy and company confidentiality. To address these concerns, and following established ethical guidelines for the conduct of such research (Borgatti and Molina, 2003), we implemented the following procedures: (a) raw data were stored on secure research servers behind the company’s firewall; (b) messages exchanged with individuals outside the firm were eliminated; (c) all identifying information such as email addresses was transformed into hashed identifiers, with the company retaining access to the key code linking identifying information to hashed identifiers; and (d) raw message content was transformed into linguistic categories so that identities could not be inferred from message content. Per terms of the non-disclosure agreement we signed with the firm, we are not able to share the data underlying the analyses reported below. 1Six pronoun categories (first singular (I), first plural (we), second (you), third singular personal (he, she), third singular impersonal (it, this), and third plural (they)) and five time/certainty categories (past tense, present tense, future tense, certainty, and tentativity). 605 We can, however, share the code and dummy test data, both of which can be accessed at http: //github.com/gabedoyle/acl2017. 
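The filtering rules used to construct the message-reply pairs can be summarized in a short sketch. This is an illustrative restatement, not the released preprocessing code; the field names (sender, recipients, body, timestamp) and the employee-tenure mapping are assumptions made for the example.

```python
from datetime import timedelta

MAX_BODY_WORDS = 500
MAX_LATENCY = timedelta(weeks=1)
MIN_TENURE_MONTHS = 6

def keep_pair(message, reply, tenure_months, automated_addresses):
    """Apply the pair-filtering rules described above.
    `message` and `reply` are dicts with sender, recipients, body, timestamp;
    `tenure_months` maps an address to that employee's overall tenure."""
    for email in (message, reply):
        if len(email["recipients"]) != 1:          # single sender and recipient only
            return False
        recipient = email["recipients"][0]
        if email["sender"] == recipient:           # drop self-addressed mail
            return False
        if email["sender"] in automated_addresses or recipient in automated_addresses:
            return False                           # drop notification systems, shared mailboxes
        n_words = len(email["body"].split())
        if n_words == 0 or n_words > MAX_BODY_WORDS:
            return False                           # drop empty or very long bodies
    if reply["timestamp"] - message["timestamp"] > MAX_LATENCY:
        return False                               # reply must follow within a week
    if tenure_months.get(reply["sender"], 0) < MIN_TENURE_MONTHS:
        return False                               # replier needs at least six months of tenure
    return True
```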
4 Model: An Extended WHAM Framework To assess alignment, we use the Word-Based Hierarchical Alignment Model (WHAM) framework (Doyle and Frank, 2016). The core principle of WHAM is that alignment is a change, usually an increase, in the frequency of using a word category in a reply when the word category was used in the preceding message. For instance, a reply to the message What will we discuss at the meeting?, is likely to have more instances of future tense than a reply to the message What did we discuss at the meeting? Under this definition, alignment is the log-odds shift from the baseline reply frequency, the frequency of the word in a reply when the preceding message did not contain the word. WHAM is a hierarchical generative modeling framework, so it uses information from related observations (e.g., multiple repliers with similar demographics) to improve its robustness on sparse data (Doyle et al., 2016). There are two key parameters, shown in Figure 2: ηbase, the log-odds of a given word category c when the preceding message did not contain c, and ηalign, the increase in the log-odds of c when the preceding message did contain c. A dynamic extension To understand enculturation, we need to track changes in both the alignment and baseline over time. We add a month-bymonth change term to WHAM, yielding a piecewise linear model of these factors over the course of an employee’s tenure. Each employee’s tenure is broken into two or three segments: their first six months after being hired, their last six months before leaving (if they leave), and the rest of their tenure.2 The linear segments for their alignment are fit as an intercept term ηalign, based at their first month (for the initial period) or their last month (for the final period), and per-month slopes α. Baseline segments are fit similarly, with parameters ηbase and β.3 To visualize the align2Within each segment, the employee’s alignment model is similar to that of Yurovsky et al. (2016), who introduced a constant by-month slope parameter to model changes in parent-child alignment during early linguistic development. 3The six month timeframe was chosen as previous research has found it to be a critical period for early enculturation (Bauer et al., 1998). Pilot investigations into the change align base 0.1 0.2 0.3 0.4 0.5 −2.7 −2.6 −2.5 −2.4 Time Log−odds parameter estimates ηalign start ηalign mid ηalign end αend αstart βstart βend ηbase mid ηbase start ηbase end Figure 1: Sample sawhorse plot with key variables labelled. The η point parameters (first month, last month, and middle average) and α (or β) bymonth slope (start, end) parameters are estimated by WHAM for each word category and employee group. ment behaviors and the parameter values, we create “sawhorse” plots, with an example in Figure 1. In our present work, we are focused on changes in cultural fit during the transitions into or out of the group, so we collapse observations outside the first/last six months into a stable point estimate, constraining their slopes to be zero. This simplification also circumvents the issue of different employees having different middle-period lengths.4 Model structure The graphical model for our instantiation of WHAM is shown in Figure 2. For each word category c, WHAM’s generative model represents each reply as a series of tokenby-token independent draws from a binomial distribution. 
The binomial probability µ is dependent on whether the preceding message did (µalign) or did not (µbase) contain a word from category c, and the inferred alignment value is the difference between these probabilities in log-odds space (ηalign). The specific values of these variables depend on three hierarchical features: the word category c, the group g that a given employee falls into, and the time period t (a piece of the piece-wise in baseline usage over time showed roughly linear changes over the first/last six months, but our linearity assumption may mask interesting variation in the enculturation trajectories. 4As shown in Figure 1, the pieces do not need to define a continuous function. Alignment behaviors continue to change in the middle of an employee’s tenure (Srivastava et al., forthcoming), so alignment six months in to the job is unlikely to be equal to alignment six months from leaving, or the average alignment over the middle tenure. 606 C N N N αg,t αc,g,t ηalign c ηalign c,g,t ηalign c,g,t,m µalign c,g,t,m Calign c,g,t,m ηbase c ηbase c,g,t ηbase c,g,t,m µbase c,g,t,m Cbase c,g,t,m βg,t βc,g,t N N N N logit−1 Binom N N logit−1 Binom N base c,g,t,m N align c,g,t,m m month group, time category Figure 2: The Word-Based Hierarchical Alignment Model (WHAM). Hierarchical chains of normal distributions capture relationships between word categories, individuals, outcome groups, and time, and generate linear predictors η, which are converted into probabilities µ for binomial draws of the words in replies. linear function: beginning, middle, or end). Note that the hierarchical ordering is different for the η chains and the α/β chains; c is above g and t for the η chains, but below them for the α/β chains. This is because we expect the static (η) values for a given word category to be relatively consistent across different groups and at different times, but we expect the values to be independent across the different word categories. Conversely, we expect that the enculturation trajectories across word categories (α/β) will be similar, while the trajectories may vary substantially across different groups and different times. Lastly, the month m in which a reply is written (measured from the start of the time period t) has a linear effect on the η value, as described below. To estimate alignment, we first divide the replies up by group, time period, and calendar month. We separate the replies into two sets based on whether the preceding message contained the category c (the “alignment” set) or not (the “baseline” set). All replies within a set are then aggregated in a single bag-of-words representation, with category token counts Calign c,g,t,m and Cbase c,g,t,m, and total token counts Nbase c,g,t,m and Nbase c,g,t,m comprising the observed variables on the far right of the model. Moving from right to left, these counts are assumed to come from binomial draws with probability µalign c,g,t,m or µbase c,g,t,m. The µ values are then in turn generated from η values in log-odds space by an inverse-logit transform, similar to linear predictors in logistic regression. The ηbase variables are representations of the baseline frequency of a marker in log-odds space, and µbase is simply a conversion of ηbase to probability space, the equivalent of an intercept term in a logistic regression. ηalign is an additive value, with µalign = logit−1(ηbase + ηalign), the equivalent of a binary feature coefficient in a logistic regression. 
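Before the time dimension is added, the relationship between counts, η, and µ just described can be written out numerically. The counts below are invented, and the snippet reads the linear predictors off raw frequencies for readability; the actual model infers the η parameters jointly under the hierarchical priors rather than directly from the data.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Aggregated token counts for one (category, group, time, month) cell:
# C = tokens of the category in replies, N = all tokens in those replies.
C_base, N_base = 180, 10_000    # replies whose preceding message lacked the category
C_align, N_align = 140, 5_000   # replies whose preceding message contained it

# Point estimates of the linear predictors.
eta_base = logit(C_base / N_base)
eta_align = logit(C_align / N_align) - eta_base   # log-odds shift = alignment
print(f"eta_base  = {eta_base:.2f}")
print(f"eta_align = {eta_align:.2f}")

# Forward direction: linear predictors -> binomial probabilities.
mu_base = inv_logit(eta_base)
mu_align = inv_logit(eta_base + eta_align)
print(f"mu_base = {mu_base:.4f}, mu_align = {mu_align:.4f}")
```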
The specific month’s η variables are calculated as a linear function: ηalign c,g,t,m = ηalign c,g,t + mαc,g,t, and similarly with β for the baseline. The remainder of the model is a hierarchy of normal distributions that integrate social structure into the analysis. In the present work, we have three levels in the hierarchy: category, group, and time period. In Analysis 1, employees are grouped by their employment outcome (stay, leave voluntarily, leave involuntarily); in Analyses 2 & 3, where we predict the employment outcomes, each group is a single employee. The normal distributions that connect these levels have identical standard deviations σ2 = .25.5 The hierarchies 5The deviation is not a theoretically motivated choice, and was chosen as a good empirical balance between reasonable parameter convergence (improved by smaller σ2) and good model log-probability (improved by larger σ2). 607 are headed by a normal distribution centered at 0, except for the ηbase hierarchy, which has a Cauchy(0, 2.5) distribution.6 Message and reply length can affect alignment estimates; the WHAM model was developed in part to reduce this effect. As different employees had different email length distributions, we further accounted for length by dividing all replies into five quintile length bins, and treated each bin as separate observations for each employee. This design choice adds an additional control factor, but results were qualitatively similar without it. All of our analyses are based on parameter estimates from RStan fits of WHAM with 500 iterations over four chains. While previous research on cultural fit has emphasized either its internalization (O’Reilly et al., 1991) or self-regulation (Goldberg et al., 2016) components, our extension to the WHAM framework helps disentangle them by estimating them as separate baseline and alignment trajectories. For example, we can distinguish between an archetypal individual who initially aligns to her colleagues and then internalizes this style of communication such that her baseline use also shifts and another archetypal person who aligns to her colleagues but does not change her baseline usage. The former exhibits high correspondence between internalization and self-regulation, whereas the latter demonstrates an ability to decouple them. 5 Analyses We perform three analyses on this data. First, we examine the qualitative behaviors of pronoun alignment and how they map onto employee outcomes in the data. Second, we show that these qualitative differences in early enculturation are meaningful, with alignment behaviors predicting employment outcome above chance. Lastly, we consider lexical versus referential levels of alignment and show that predictions are improved under the referential formulation, suggesting that alignment is not limited to low-level wordrepetition effects. 6As ηbase is the log-odds of each word in a reply being a part of the category c, it is expected to be substantially negative. For example, second person pronouns (you), are around 2% of the words in replies, approximately −4 in log-odds space. We follow Gelman et al. (2008)’s recommendation of the Cauchy prior as appropriate for parameter estimation in logistic regression. I You We align base 0.0 0.2 0.4 0.6 −4.5 −4.0 −3.5 Time Log−odds parameter estimates Figure 3: Sawhorse plots showing the dynamics of pronoun alignment behavior across employees. Vertical axis shows log-odds for baseline and alignment. Top row shows estimated alignment, highest for we and smallest for you. 
Bottom row shows baseline dynamics, with employees shifting toward the average usage as they enculturate. The shaded region is one standard deviation over parameter samples. 5.1 Analysis 1: Dynamic Qualitative Changes We begin with descriptive analyses of the behavior of pronouns, which are likely to reflect incorporation into the company. In particular, we look at first-person singular (I), first-person plural (we), and second-person pronouns (you). We expect that increases in we usage will occur as the employee is integrated into the group, while I and you usage will decrease, and want to understand whether these changes manifest on baseline usage (i.e., internalization), alignment (i.e., self-regulation), or both. Design We divided each employee’s emails by calendar month, and separated them into the employee’s first six months, their last six months (if an employee left the company within the observation period), and the middle of their tenure. Employees with fewer than twelve months at the company were excluded from this analysis, so that their first and last months did not overlap. We fit two WHAM models in this analysis. The first aggregated all employees, regardless of employment outcome, to minimize noise; the second separated them by outcome to analyze cultural fit differences. Outcome-aggregated model We start with the aggregated behavior of all employees, shown in Figure 3. For baselines, we see decreased use of I 608 I You We align base −0.2 0.0 0.2 0.4 0.6 0.8 −5.0 −4.5 −4.0 −3.5 Time Log−odds parameter estimates Outcome: invol stay vol Figure 4: Sawhorse plots split by employment outcome. Mid-tenure points are jittered for improved readability. and you over the first six months, with we usage increasing over the same period, confirming the expected result that incorporating into the group is accompanied by more inclusive pronoun usage. Despite the baseline changes, alignment is fairly stable through the first six months. Alignment on first-person singular and second-person pronouns is lower than first-person plural pronouns, likely due to the fact that I or you have different referents when used by the two conversants, while both conversants could use we to refer to the same group. We will consider this referential alignment in more detail in Analysis 3. Since employees with different outcomes have much different experiences over their last six months, we will not discuss them in aggregate, aside from noting the sharp decline in we alignment near the end of the employees’ tenures. Outcome-separated model Figure 4 shows outcome-specific trajectories, with green lines showing involuntary leavers (i.e., those who are fired or downsized), blue showing voluntary leavers, and orange showing employees who remained at the company through the final month of the data. The use of I and you is similar to the aggregates in Figure 3, regardless of group. The last six months of I usage show an interesting difference, where involuntary leavers align more on I but retain a stable baseline while voluntary leavers retain a stable alignment but increase I overall, which is consistent with group separation. The most compelling result we see here, though, is the changes in we usage by different groups of employees. 
Employees who eventually leave the company involuntarily show signs of more selfregulation than internalization over the first six months, increasing their alignment while decreasing their baseline use (though they return to more similar levels as other employees later in their tenure). Employees who stay at the company, as well as those who later leave voluntarily, show signs of internalization, increasing their baseline usage to the company average, as well as adapting their alignment levels to the mean. This finding suggests that how quickly the employees internalize culturally-standard language use predicts their eventual employment outcome, even if they eventually end up near the average. 5.2 Analysis 2: Predicting Outcomes This analysis tests the hypothesis that there are meaningful differences in employees’ initial enculturation, captured by alignment behaviors. We examine the first six months of communications and attempt to predict whether the employee will leave the company. We find that, even with a simple classifier, alignment behaviors are predictive of employment outcome. Design We fit the WHAM model to only the first six months of email correspondence for all employees who had at least six months of email. The model estimated the initial level of baseline use (ηbase) and alignment (ηalign) for each employee, as well as the slope (α, β) for baseline and alignment over those first six months, over all 11 word categories mentioned in Section 3. We then created logistic regression classifiers, using the parameter estimates to predict whether an employee would leave the company. We fit separate classifiers for leaving voluntarily or involuntarily. Our results show that early alignment behaviors are better at identifying employees who will leave involuntarily than voluntarily, consistent with Srivastava et al.’s (forthcoming) findings that voluntary leavers are similar to stayers until late in their tenure. We fit separate classifiers using the alignment parameters and the baseline parameters to investigate their relative informativity. For each model, we report the area under the curve (AUC). This value is estimated from the receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate over different classification thresholds. An AUC of 0.5 represents chance performance. We use balanced, stratified cross609 validation to reduce AUC misestimation due to unbalanced outcome frequencies and high noise (Parker et al., 2007). Results The left column of Figure 5 shows the results over 10 runs of 10-fold balanced logistic classifiers with stratified cross-validation in R. The alignment-based classifiers are both above chance at predicting that an employee will leave the company, whether involuntarily or voluntarily. The baseline-based classifiers perform worse, especially on voluntary leavers. This finding is consistent with the idea that voluntary leavers resemble stayers (who form the bulk of the employees) until late in their tenure when their cultural fit declines. We fit a model using both alignment and baseline parameters, but this model yielded an AUC value below the alignment-only classifier. This suggests that where alignment and baseline behaviors are both predictive, they do not provide substantially different predictive power and lead to overfitting. 
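The classification design can be illustrated with a short sketch. The original analysis was run in R; the scikit-learn version below mirrors the setup (balanced logistic regression, stratified 10-fold cross-validation, AUC) on dummy features and labels, so all shapes and values are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: one row per employee, columns stand in for the per-employee
# WHAM estimates over the first six months (eta_align and slope alpha for each
# of the 11 word categories).
n_employees, n_features = 485, 22
X = rng.normal(size=(n_employees, n_features))
y = rng.integers(0, 2, size=n_employees)      # 1 = left involuntarily (dummy labels)

# Balanced logistic classifier with stratified cross-validation, scored by AUC.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, proba))        # ~0.5 on random dummy data
```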
A more sophisticated classifier may overcome these challenges; our goal here was not to achieve maximal classification performance but to test whether alignment provided any useful information about employment outcomes. 5.3 Analysis 3: Types of Alignment Our final analysis investigates the nature of linguistic alignment: specifically, whether there is an effect of referential alignment beyond that of the more commonly used lexical alignment. Testing this hypothesis requires a small change to the alignment calculations. Lexical alignment is based on the conditional probability of the replier using a word category c given that the preceding message used that same category c. For referential alignment, we examine the conditional probability of the replier using a word category cj given that the preceding message used the category ci, where ci and cj are likely to be referentially linked. We also consider cases where ci is likely to transition to cj throughout the course of the conversation, such as present tense verbs turning into past tense as the event being described recedes into the past. The pairs of categories that are likely to be referentially or transitionally linked are: (you, I); (we, I); (you, we); (past, present); (present, future); and (certainty, tentativity). We include both directions of these pairs, so this provides approximately the same number of predictor variables for both situalexical referential invol vol align base align base 0.50 0.54 0.58 0.62 0.50 0.54 0.58 0.62 Parameter set Classifier AUC Figure 5: AUC values for 10 runs of 10-fold crossvalidated logistic classifiers, with 95% confidence intervals on the mean AUC. Both lexical (left column) and referential (right column) alignment parameters lead to above chance classifier performance, but referential alignment outperforms lexical alignment at predicting both voluntary and involuntary departures. tions to maximize comparability (12 for the referential alignments, 11 for the lexical). This modification does not change the structure of the WHAM model, but rather changes its C and N counts by reclassifying replies between the baseline or alignment pathways. Results Figure 5 plots the differences in predictive model performance using lexical versus referential alignment parameters. We find that the semantic parameters provide more accurate classification than the lexical both for voluntarily and involuntarily-leaving employees. This suggests that while previous work looking at lexical alignment successfully captures social structure, referential alignment may reflect a deeper and more accurate representation of the social structure. It is unclear if this behavior holds in less formal situations or with weaker organizational structure and shared goals, but these results suggest that the traditional alignment approach of only measuring lexical alignment should be augmented with referential alignment measures for a more complete analysis. 6 Discussion A key finding from this work is that pronoun usage behaviors in employees’ email communication are consistent with social integration into the group; employees use “I” pronouns less and 610 “we” pronouns more as they integrate. Furthermore, we see the importance of using an alignment measure such as WHAM for distinguishing the base rate and alignment usage of words. Employees who leave the company involuntarily show increased “we” usage through greater alignment, using “we” more when prompted by a colleague, but introducing it less of their own accord. 
This suggests that these employees do not feel fully integrated into the group, although they are willing to identify as a part of it when a more fully-integrated group member includes them, corresponding to self-regulation over internalization. The fact that these alignment measures alone, without any job productivity or performance metrics, have some predictive capability for employees' leaving the company suggests the potential for support or intervention programs to help high-performing but poorly-integrated employees integrate into the company better. More generally, the prominence of pronominally-driven communication changes suggests that alignment analyses can provide insight into a range of social integration settings. This may be especially helpful in cases where there is great pressure to integrate smoothly, and people would be likely to adopt a self-regulating approach even if they do not internalize their group membership. Such settings include not only the high-stakes situation of keeping one's job, but also transitioning from high school to college or moving to a new country or region. Maximizing the chances for new members to become comfortable within a group is critical both for spreading useful aspects of the group's existing culture to new members and for integrating new ideas from the new members' knowledge and practices. Alignment-based approaches can be a useful tool in separating effective interventions that cause internalization of the group dynamics from those that lead to more superficial self-regulation changes. 7 Conclusions This paper described an effort to use directed linguistic alignment as a measure of cultural fit within an organization. We adapted a hierarchical alignment model from previous work to estimate fit within corporate email communications, focusing on changes in language during employees' entry to and exit from the company. Our results showed substantial changes in the use of pronouns, with pronoun patterns varying by employees' outcomes within the company. The use of the first-person plural "we" during an employee's first six months is particularly instructive. Whereas stayers exhibited increased baseline use, indicating internalization, those eventually departing involuntarily were on the one hand decreasingly likely to introduce "we" into conversation, but increasingly responsive to interlocutors' use of the pronoun. While not internalizing a shared identity with their peers, involuntarily departed employees were overly self-regulating in response to its invocation by others. Quantitatively, rates of usage and alignment in the first six months of employment carried information about whether employees left involuntarily, pointing towards fit within the company culture early on as an indicator of eventual employment outcomes. Finally, we saw ways in which the application of alignment to cultural fit might help to refine ideas about alignment itself: preliminary analysis suggested that referential, rather than lexical, alignment was more predictive of employment outcomes. More broadly, these results suggest ways that quantitative methods can be used to make precise application of concepts like "cultural fit" at scale. 8 Acknowledgments This work was supported by NSF Grant #1456077; the Garwood Center for Corporate Innovation at the Haas School of Business, University of California, Berkeley; the Stanford Data Science Initiative; and the Stanford Graduate School of Business. References Talya N.
Bauer, Elizabeth Wolfe Morrison, and Ronda Roberts Callister. 1998. Socialization research: A review and directions for future research. In Research in Personnel and Human Resources Management, Emerald Group, Bingley, UK, volume 16, pages 149–214. Stephen P. Borgatti and Jos´e Luis Molina. 2003. Ethical and strategic issues in organizational social network analysis. The Journal of Applied Behavioral Science 39(3):337–349. Richard Y. Bourhis and Howard Giles. 1977. The language of intergroup distinctiveness. In H Giles, editor, Language, Ethnicity, and Intergroup Relations, Academic Press, London, pages 119–135. David B. Buller and R. Kelly Aune. 1988. The effects of vocalics and nonverbal sensitivity on compliance: A speech accommodation theory explanation. Human Communication Research 14:301–32. Cindy Chung and James W. Pennebaker. 2007. The psychological functions of function words. In K Fiedler, editor, Social communication, Psychology Press, New York, chapter 12, pages 343–359. Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of the 21st international conference on World Wide Web. page 699. https://doi.org/10.1145/2187836.2187931. Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. No country for old members: user lifecycle and linguistic change in online communities. In Proceedings of the 22nd International Conference on World Wide Web. pages 307–318. Gabriel Doyle and Michael C. Frank. 2016. Investigating the sources of linguistic alignment in conversation. In Proceedings of ACL. Gabriel Doyle, Dan Yurovsky, and Michael C. Frank. 2016. A robust framework for estimating linguistic alignment in Twitter conversations. In Proceeedings of WWW. R. E. Feldman. 1968. Response to compatriots and foriegners who seek assistance. Journal of Personality and Social Psychology 10:202–14. Riccardo Fusaroli, Bahador Bahrami, Karsten Olsen, Andreas Roepstorff, Geraint Rees, Chris Frith, and Kristian Tyl´en. 2012. Coming to Terms: Quantifying the Benefits of Linguistic Coordination. Psychological Science 23(8):931–939. https://doi.org/10.1177/0956797612436816. Andrew Gelman, Aleks Jakulin, Maria Grazia Pittau, and Yu-Sung Su. 2008. A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics . Howard Giles, Nikolas Coupland, and Justine Coupland. 1991. Accommodation theory: Communication, context, and consequences. In Howard Giles, Justine Coupland, and Nikolas Coupland, editors, Contexts of accommodation: Developments in applied sociolinguistics, Cambridge University Press, Cambridge. Augusto Gnisci. 2005. Sequential strategies of accommodation: A new method in courtroom. British Journal of Social Psychology 44(4):621–643. Amir Goldberg, Sameer B. Srivastava, V. Govind Manian, and Christopher Potts. 2016. Fitting in or standing out? The tradeoffs of structural and cultural embeddedness. American Sociological Review . Molly E. Ireland, Richard B. Slatcher, Paul W. Eastwick, Lauren E. Scissors, Eli J. Finkel, and James W. Pennebaker. 2011. Language style matching predicts relationship initiation and stability. Psychological Science 22:39–44. https://doi.org/10.1177/0956797610392928. Kate G. Niederhoffer and James W. Pennebaker. 2002. Linguistic style matching in social interaction. Journal of Language and Social Psychology 21(4):337– 360. 
http://jls.sagepub.com/content/21/4/337.short. Charles A. O’Reilly, Jennifer Chatman, and David F. Caldwell. 1991. People and organizational culture: a profile comparison approach to assessing personorganization fit. Academy of Management Journal 34(3):487–516. Brian J Parker, Simon G¨unter, and Justin Bedo. 2007. Stratification bias in low signal microarray studies. BMC bioinformatics 8(1):1. James W. Pennebaker, Cindy K. Chung, Molly Ireland, Amy Gonzalez, and Roger J. Booth. 2007. The development and psychometric properties of liwc2007. Technical report, LIWC.net. Martin J. Pickering and Simon Garrod. 2004. Toward a mechanistic psychology of dialogue. Behavioral and brain sciences 27(2):169–190. https://doi.org/10.1017/S0140525X04000056. Sameer B. Srivastava, Amir Goldberg, V. Govind Manian, and Christopher Potts. forthcoming. Enculturation trajectories: Language, cultural adaptation, and individual outcomes in organizations. Management Science . Harry C. Triandis. 1960. Cognitive similarity and communication in a dyad. Human Relations 13:175– 183. Dan Yurovsky, Gabriel Doyle, and Michael C. Frank. 2016. Linguistic input is tuned to children’s developmental level. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society. 612
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 613–622 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1057 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 613–622 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1057 Representations of language in a model of visually grounded speech signal Grzegorz Chrupała Tilburg University [email protected] Lieke Gelderloos Tilburg University [email protected] Afra Alishahi Tilburg University [email protected] Abstract We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaningbased linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of formrelated aspects of the language input tends to initially increase and then plateau or decrease. 1 Introduction Speech recognition is one of the success stories of language technology. It works remarkably well in a range of practical settings. However, this success relies on the use of very heavy supervision where the machine is fed thousands of hours of painstakingly transcribed audio speech signal. Humans are able to learn to recognize and understand speech from notably weaker and noisier supervision: they manage to learn to extract structure and meaning from speech by simply being exposed to utterances situated and grounded in their daily sensory experience. Modeling and emulating this remarkable skill has been the goal of numerous studies; however in the overwhelming majority of cases researchers used severely simplified settings where either the language input or the extralinguistic sensory input, or both, are small scale and symbolically represented. Section 2 provides a brief overview of this research. More recently several lines of work have moved towards more realistic inputs while modeling or emulating language acquisition in a grounded setting. Gelderloos and Chrupała (2016) use the image captioning dataset MS COCO (Lin et al., 2014) to mimic the setting of grounded language learning: the sensory input consists of images of natural scenes, while the language input are phonetically transcribed descriptions of these scenes. The use of such moderately large and low-level data allows the authors to train a multi-layer recurrent neural network model, and to explore the nature and localization of the emerging hierarchy of linguistic representations learned in the process. Furthermore, in a series of recent studies Harwath and Glass (2015); Harwath et al. (2016); Harwath and Glass (2017) use image captioning datasets to model learning to understand spoken language from visual context with convolutional neural network models. Finally, there is a small but growing body of work dedicated to elucidating the nature of representations learned by neural networks from language data (see Section 2.2 for a brief overview). 
In the current work we build on these three strands of research and contribute the following advances: • We use a multi-layer gated recurrent neural network to properly model the temporal nature of speech signal and substantially improve performance compared to the convolutional architecture from Harwath and Glass (2015); • We carry out an in-depth analysis of the representations used by different components of the trained model and correlate them to representations learned by a text-based model and to human patterns of judgment on linguistic stimuli. This analysis is especially novel for a model with speech signal as input. The general pattern of findings in our analysis is 613 as follows: The model learns to extract from the acoustic input both form-related and semanticsrelated information, and encodes it in the activations of the hidden layers. Encoding of semantic aspects tends to become richer as we go up the hierarchy of layers. Meanwhile, encoding of formrelated aspects of the language input, such as utterance length or the presence of specific words, tends to initially increase and then decay. We release the code for our models and analyses as open source, available at https://github.com/gchrupala/visually-groundedspeech. We also release a dataset of synthetically spoken image captions based on MS COCO, available at https://doi.org/10.5281/zenodo.400926. 2 Related work Children learn to recognize and assign meaning to words from continuous perceptual data in extremely noisy context. While there have been many computational studies of human word meaning acquisition, they typically make strong simplifying assumptions about the nature of the input. Often language input is given in the form of word symbols, and the context consists of a set of symbols representing possible referents (e.g. Siskind, 1996; Frank et al., 2007; Fazly et al., 2010). In contrast, several studies presented models that learn from sensory rather than symbolic input, which is rich with regards to the signal itself, but very limited in scale and variation (e.g. Roy and Pentland, 2002; Yu and Ballard, 2004; Lazaridou et al., 2016). 2.1 Multimodal language acquisition Chrupała et al. (2015) introduce a model that learns to predict the visual context from image captions. The model is trained on image-caption pairs from MSCOCO (Lin et al., 2014), capturing both rich visual input as well as larger scale input, but the language input still consists of word symbols. Gelderloos and Chrupała (2016) propose a similar architecture that instead takes phonemelevel transcriptions as language input, thereby incorporating the word segmentation problem into the learning task. In this work, we introduce an architecture that learns from continuous speech and images directly. This work is related to research on visual grounding of language. The field is large and growing, with most work dedicated to the grounding of written text, particularly in image captioning tasks (see Bernardi et al. (2016) for an overview). However, learning to ground language to visual information is also interesting from an automatic speech recognition point of view. Potentially, ASR systems could be trained from naturally co-occurring visual context information, without the need for extensive manual annotation – a particularly promising prospect for speech recognition in low-resource languages. There have been several attempts along these lines. Synnaeve et al. 
(2014) present a method of learning to recognize spoken words in isolation from cooccurrence with image fragments. Harwath and Glass (2015) present a model that learns to map pre-segmented spoken words in sequence to aspects of the visual context, while in Harwath and Glass (2017) the model also learns to recognize words in the unsegmented signal. Most closely related to our work is that of Harwath et al. (2016), as it presents an architecture that learns to project images and unsegmented spoken captions to the same embedding space. The sentence representation is obtained by feeding the spectrogram to a convolutional network. The architecture is trained on crowd-sourced spoken captions for images from the Places dataset (Zhou et al., 2014), and evaluated on image search and caption retrieval. Unfortunately this dataset is not currently available and we were thus unable to directly compare the performance of our model to Harwath et al. (2016). We do compare to Harwath and Glass (2015) which was tested on a public dataset. We make different architectural choices, as our models are based on recurrent highway networks (Zilly et al., 2016). As in human cognition, speech is processed incrementally. This also allows our architecture to integrate information sequentially from speech of arbitrary duration. 2.2 Analysis of neural representations While analysis of neural methods in NLP is often limited to evaluation of the performance on the training task, recently methods have been introduced to peek inside the black box and explore what it is that enables the model to perform the task. One approach is to look at the contribution of specific parts of the input, or specific units in the model, to final representations or decisions. K´ad´ar et al. (2016) propose omission scores, a method to estimate the contribution of input tokens to the fi614 nal representation by removing them from the input and comparing the resulting representations to the ones generated by the original input. In a similar approach, Li et al. (2016) study the contribution of individual input tokens as well as hidden units and word embedding dimensions by erasing them from the representation and analyzing how this affects the model. Miao et al. (2016) and Tang et al. (2016) use visualization techniques for fine-grained analysis of GRU and LSTM models for ASR. Visualization of input and forget gate states allows Miao et al. (2016) to make informed adaptations to gated recurrent architectures, resulting in more efficiently trainable models. Tang et al. (2016) visualize qualitative differences between LSTM- and GRUbased architectures, regarding the encoding of information, as well as how it is processed through time. We specifically study linguistic properties of the information encoded in the trained model. Adi et al. (2016) introduce prediction tasks to analyze information encoded in sentence embeddings about word order, sentence length, and the presence of individual words. We use related techniques to explore encoding of aspects of form and meaning within components of our stacked architecture. 3 Models We use a multi-layer, gated recurrent neural network (RHN) to model the temporal nature of speech signal. Recurrent neural networks are designed for modeling sequential data, and gated variants (GRUs, LSTMs) are widely used with speech and text in both cognitive modeling and engineering contexts. RHNs are a simple generalization of GRU networks such that the transform between time points can consist of several steps. 
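The exact update equations are given below; as a runnable illustration of what a deeper transform between time points means, here is a plain-NumPy sketch of a single RHN timestep with L micro-steps. Dimensions, initialization, and parameter names are arbitrary choices for the example, and with L = 1 the update collapses to a single gated, GRU-like step.

```python
import numpy as np

def rhn_step(x, s_prev, params, L):
    """One timestep of an RHN layer with L micro-steps (recurrence depth).
    The input x only enters at the first micro-step."""
    s = s_prev
    for l in range(L):
        inp = params["W_H"] @ x if l == 0 else 0.0
        gate_inp = params["W_T"] @ x if l == 0 else 0.0
        h = np.tanh(inp + params["U_H"][l] @ s)
        t = 1.0 / (1.0 + np.exp(-(gate_inp + params["U_T"][l] @ s)))  # sigmoid gate
        s = h * t + s * (1.0 - t)   # blend new content with the carried state
    return s

d_in, d_hid, L = 8, 16, 2
rng = np.random.default_rng(0)
params = {
    "W_H": rng.normal(scale=0.1, size=(d_hid, d_in)),
    "W_T": rng.normal(scale=0.1, size=(d_hid, d_in)),
    "U_H": rng.normal(scale=0.1, size=(L, d_hid, d_hid)),
    "U_T": rng.normal(scale=0.1, size=(L, d_hid, d_hid)),
}

# Unroll over a toy input sequence of 5 frames.
s = np.zeros(d_hid)
for x in rng.normal(size=(5, d_in)):
    s = rhn_step(x, s, params, L)
print(s.shape)
```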
Our multimodal model projects spoken utterances and images to a joint semantic space. The idea of projecting different modalities to a shared semantic space via a pair of encoders has been used in work on language and vision (among them Vendrov et al. (2015)). The core idea is to encourage inputs representing the same meaning in different modalities to end up nearby, while maintaining a distance from unrelated inputs. The model consists of two parts: an utterance encoder, and an image encoder. The utterance encoder starts from MFCC speech features, while the image encoder starts from features extracted with a VGG-16 pre-trained on ImageNet. Our loss function attempts to make the cosine distance between encodings of matching utterances and images greater than the distance between encodings of mismatching utterance/image pairs, by a margin: (1) X u,i X u′ max[0, α +d(u, i)−d(u′, i)] + X i′ max[0, α + d(u, i) −d(u, i′)] ! where d(u, i) is the cosine distance between the encoded utterance u and encoded image i. Here (u, i) is the matching utterance-image pair, u′ ranges over utterances not describing i and i′ ranges over images not described by u. The image encoder enci is a simple linear projection, followed by normalization to unit L2 norm: enci(i) = unit(Ai + b) (2) where unit(x) = x (xT x)0.5 and with (A, b) as learned parameters. The utterance encoder encu consists of a 1-dimensional convolutional layer of length s, size d and stride z, whose output feeds into a Recurrent Highway Network with k layers and L microsteps, whose output in turn goes through an attention-like lookback operator, and finally L2 normalization: encu(u) = unit(Attn(RHNk,L(Convs,d,z(u)))) (3) The main function of the convolutional layer Convs,d,z is to subsample the input along the temporal dimension. We use a 1-dimensional convolution with full border mode padding. The attention operator simply computes a weighted sum of the RHN activation at all timesteps: Attn(x) = X t αtxt (4) where the weights αt are determined by learned parameters U and W, and passed through the timewise softmax function: αt = exp(U tanh(Wxt)) P t′ exp(U tanh(Wxt′)) (5) The main component of the utterance encoder is a recurrent network, specifically a Recurrent Highway Network (Zilly et al., 2016). The idea behind 615 RHN is to increase the depth of the transform between timesteps, or the recurrence depth. Otherwise they are a type of gated recurrent networks. The transition from timestep t −1 to t is then defined as: rhn(xt, s(L) t−1) = s(L) t (6) where xt stands for input at time t, and s(l) t denotes the state at time t at recurrence layer l, with L being the top layer of recurrence. Furthermore, s(l) t = h(l) t ⊙t(l) t + s(l−1) t ⊙  1 −t(l) t  (7) where ⊙is elementwise multiplication, and h(l) t = tanh  I[l = 1]WHxt + UHls(l−1) t  (8) t(l) t = σ  I[l = 1]WT xt + UTls(l−1) (9) Here I is the indicator function: input is only included in the computation for the first layer of recurrence l = 1. By applying the rhn function repeatedly, an RHN layer maps a sequence of inputs to a sequence of states: (10) RHN(X, s0) = rhn(xn, . . . , rhn(x2, rhn(x1, s(L) 0 ))) Two or more RHN layers can be composed into a stack: RHN2(RHN1(X, s1 (L) 0 ), s2 (L) 0 ), (11) where sn (l) t stands for the state vector of layer n of the stack, at layer l of recurrence, at time t. In our version of the Stacked RHN architecture we use residualized layers: RHNres(X, s0) = RHN(X, s0) + X (12) This formulation tends to ease optimization in multi-layer models (cf. 
He et al., 2015; Oord et al., 2016). In addition to the speech model described above, we also define a comparable text model. As it takes a sequence of words as input, we replace the convolutional layer with a word embedding lookup table. We found the text model did not benefit from the use of the attention mechanism, and thus the sentence embedding is simply the L2-normalized activation vector of the topmost layer, at the last timestep. 4 Experiments Our main goal is to analyze the emerging representations from different components of the model and to examine the linguistic knowledge they encode. For this purpose, we employ a number of tasks that cover the spectrum from fully formbased to fully semantic. In Section 4.2 we assess the effectiveness of our architecture by evaluating it on the task of ranking images given an utterance. Sections 4.3 to 4.6 present our analyses. In Sections 4.3 and 4.4 we define auxiliary tasks to investigate to what extent the network encodes information about the surface form of an utterance from the speech input. In Section 4.5 and 4.6 we focus on where semantic information is encoded in the model. In the analyses, we use the following features: Utterance embeddings: the weighted sum of the unit activations on the last layer, as calculated by Equation (3). Average unit activations: hidden layer activations averaged over time and L2-normalized for each hidden layer. Average input vectors: the MFCC vectors averaged over time. We use this feature to examine how much information can be extracted from the input signal only. 4.1 Data For the experiments reported in the remainder of the paper we use two datasets of images with spoken captions. 4.1.1 Flickr8K The Flickr8k Audio Caption Corpus was constructed by having crowdsource workers read aloud the captions in the original Flickr8K corpus (Hodosh et al., 2013). For details of the data collection procedure refer to Harwath and Glass (2015). The datasets consist of 8,000 images, each image with five descriptions. One thousand images are held out for validation, and another one thousand for the final test set. We use the splits provided by (Karpathy and Fei-Fei, 2015). The image features come from the final fully connect layer of VGG-16 (Simonyan and Zisserman, 2014) pre-trained on Imagenet (Russakovsky et al., 2014). We generate the input signal as follows: we extract 12-dimensional mel-frequency cepstral coefficients (MFCC) plus log of the total energy. We 616 then compute and add first order and second order differences (deltas) for a total of 37 dimensions. We use 25 milisecond windows, sampled every 10 miliseconds.1 4.1.2 Synthetically spoken COCO We generated synthetic speech for the captions in the MS COCO dataset (Lin et al., 2014) via the Google Text-to-Speech API.2 The audio and the corresponding MFCC features are released as Chrupała et al. (2017)3. This TTS system we used produces high-quality realistic-sounding speech. It is nevertheless much simpler than real human speech as it uses a single voice, and lacks tempo variation or ambient noise. The data consists of over 300,000 images, each with five spoken captions. Five thousand images each are held out for validation and test. We use the splits and image features provided by Vendrov et al. (2015).4 The image features also come from the VGG-16 network, but are averages of feature vectors for ten crops of each image. 
For the MS COCO captions we extracted only plain MFCC and total energy features, and did not add deltas, in order to keep the amount of computation manageable given the size of the dataset.

4.2 Image retrieval

We evaluate our model on the task of ranking images given a spoken utterance, such that highly ranked images contain scenes described by the utterance. The performance on this task on validation data is also used to choose the best variant of the model architecture and to tune the hyperparameters. We compare the speech models to models trained on written sentences split into words. The best settings found for the four models were the following:

Flickr8K Text RHN: 300-dimensional word embeddings, 1 hidden layer with 1024 dimensions, 1 microstep, initial learning rate 0.001.

Flickr8K Speech RHN: convolutional layer with length 6, size 64, stride 2, 4 hidden layers with 1024 dimensions, 2 microsteps, attention MLP with 128 hidden units, initial learning rate 0.0002.

COCO Text RHN: 300-dimensional word embeddings, 1 hidden layer with 1024 dimensions, 1 microstep, initial learning rate 0.001.

COCO Speech RHN: convolutional layer with length 6, size 64, stride 3, 5 hidden layers with 512 dimensions, 2 microsteps, attention MLP with 512 hidden units, initial learning rate 0.0002.

All models were optimized with Adam (Kingma and Ba, 2014) with early stopping: we kept the parameters for the epoch which showed the best recall@10 on validation data.

Footnotes: (1) We noticed that for a number of utterances the audio signal was very long: on inspection it turned out that most of these involved failure to switch off the microphone on the part of the workers, and the audio contained ambient noise or unrelated speech. We thus truncated all audio for this dataset at 10,000 milliseconds. (2) Available at https://github.com/pndurette/gTTS. (3) Available at https://doi.org/10.5281/zenodo.400926. (4) See https://github.com/ivendrov/order-embedding.

Table 1: Image retrieval performance on Flickr8K. R@N stands for recall at N; ˜r stands for median rank of the correct image.
Model            R@1    R@5    R@10   ˜r
Speech RHN4,2    0.055  0.163  0.253  48
Spectr. CNN      –      –      0.179  –
Text RHN1,1      0.127  0.364  0.494  11

Table 2: Image retrieval performance on MS COCO. R@N stands for recall at N; ˜r stands for median rank of the correct image.
Model            R@1    R@5    R@10   ˜r
Speech RHN5,2    0.111  0.310  0.444  13
Text RHN1,1      0.169  0.421  0.565  8

Table 1 shows the results for the human speech from the Flickr8K dataset. The Speech RHN model scores substantially higher than the model of Harwath and Glass (2015) on the same data. However, the large gap between its performance and the scores of the text model suggests that Flickr8K is rather small for the speech task. In Table 2 we present the results on the dataset of synthetic speech from MS COCO. Here the text model is still better, but the gap is much smaller than for Flickr8K. We attribute this to the much larger size of the dataset, and to the less noisy and less variable synthetic speech.

While the MS COCO text model is overall better than the speech model, there are cases where the speech model outperforms the text model. We listed the top hundred cases where the ratio of the ranks of the correct image according to the two models was the smallest, as well as another hundred cases where it was the largest. Manual inspection did not turn up any obvious patterns for the cases of text being better than speech. For the cases where speech outperformed text, two patterns stood out: (i) sentences with spelling mistakes, (ii) unusually long sentences.
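Before turning to examples of these two patterns, the ranking metrics reported in Tables 1 and 2 can be made concrete with the sketch below, which computes R@N and the median rank of the correct image from encoded utterances and images. It is only an illustration, under the simplifying assumption of one caption per image and L2-normalized encodings; the array names are placeholders rather than anything from the released code.

```python
import numpy as np

def retrieval_scores(utt_emb, img_emb, ks=(1, 5, 10)):
    """utt_emb: (N, D) utterance encodings; img_emb: (N, D) encodings of the
    row-aligned matching images. Both are assumed L2-normalized, so the dot
    product equals cosine similarity."""
    sims = utt_emb @ img_emb.T                     # (N, N) similarity matrix
    order = np.argsort(-sims, axis=1)              # best-matching image first
    # 1-based rank of the correct image, which sits on the diagonal
    ranks = np.array([int(np.where(order[i] == i)[0][0]) + 1
                      for i in range(len(order))])
    recall_at = {k: float(np.mean(ranks <= k)) for k in ks}
    return recall_at, float(np.median(ranks))
```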
For example, for the sentence a yellow and white birtd is in flight, the text model misses the misspelled word birtd and returns an irrelevant image, while the speech model seems robust to some degree of variation in pronunciation and returns the target image at rank 1 (see Figure 1).

Figure 1: Images returned for the utterance a yellow and white birtd is in flight by the text (left) and speech (right) models.

In an attempt to quantify this effect we counted the number of unique words with training set frequencies below 5 in the top 100 utterances with lowest and highest rank ratio: for the utterances where text was better there were 16 such words; for utterances where speech was better there were 28, among them misspellings such as streeet, scears (for skiers), contryside, scull, birtd, devise.

The distribution of utterance lengths in Figure 2 confirms pattern (ii): the set of 100 sentences where speech beats text by a large margin are longer on average and there are extremely long outliers among them. One of them is the 36-word-long utterance depicted in Figure 3, with ranks 470 and 2 for text and speech respectively. We suspect that the speech model's attention mechanism enables it to cherry-pick key fragments of such monster utterances, while the text model, lacking this mechanism, may struggle. Figure 3 shows the plot of the attention weights for this utterance from the speech model.

Figure 2: Length distribution for sentences where one model performs much better than the other.

4.3 Predicting utterance length

Our first auxiliary task is to predict the length of the utterance, using the features explained at the beginning of Section 4. Since the length of an utterance directly corresponds to how long it takes to articulate, we also use the number of time steps5 as a feature and expect it to provide the upper bound for our task, especially for synthetic speech. We use a Ridge Regression model for predicting utterance length from each set of features. The model is trained on 80% of the sentences in the validation set, and tested on the remaining 20%. For all features a regularization penalty of α = 1.0 gave the best results.

Figure 4 shows the results for this task on human speech from Flickr8K and synthetic speech from COCO. With the exception of the average input vectors for Flickr8K, all features can explain a high proportion of variance in the predicted utterance length. The pattern observed for the two datasets is slightly different: due to the systematic conversion of words to synthetic speech in COCO, using the number of time steps for this dataset yields the highest R2. However, this feature is not as informative for predicting the utterance length in Flickr8K, due to noise and variation in human speech, and is in fact outperformed by some of the features extracted from the model. Also, the input vectors from COCO are much more informative than those from Flickr8K, due to the larger quantity and simpler structure of the speech signal. However, in both datasets the best (non-ceiling) performance is obtained by using average unit activations from the hidden layers (layer 2 for COCO, and layers 3 and 4 for Flickr8K). These features outperform utterance embeddings, which are optimized according to the visual grounding objective of the model and most probably learn to ignore the superficial characteristics of the utterance that do not contribute to matching the corresponding image.
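Concretely, the length probe described above amounts to something like the following scikit-learn sketch; it is a hedged illustration in which `features` and `lengths` are placeholder arrays for one feature set and the utterance lengths, not names from any released code.

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def length_probe_r2(features, lengths, alpha=1.0, seed=0):
    """features: (N, D) matrix for one feature set (e.g. averaged layer activations);
    lengths: (N,) utterance lengths in words. Returns R^2 on a held-out 20% split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, lengths, test_size=0.2, random_state=seed)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)   # alpha = regularization penalty
    return r2_score(y_te, model.predict(X_te))
```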
Note that the performance on COCO plateaus after the second layer, which might suggest that form-based knowledge is learned by lower layers. Since Flickr8K is much smaller in size, the stabilising happens later in layer 3. 5This is approximately duration in milliseconds 10×stride . 618 Figure 3: Attention weight distribution for a long utterance. Figure 4: R2 values for predicting utterance length for Flickr8K and COCO. Layers 1–5 represent (normalized) average unit activation, whereas the first (#0) and last point represent average input vectors and utterance embeddings, respectively. 4.4 Predicting word presence Results from the previous experiment suggest that our model acquires information about higher level building blocks (words) in the continuous speech signal. Here we explore whether it can detect the presence or absence of individual words in an utterance. We formulate detecting a word in an utterance as a binary classification task, for which we use a multi-layer perceptron with a single hidden layer of size 1024, optimized by Adam. The input to the model is a concatenation of the feature vector representing an utterance and the one representing a target word. We again use utterance embeddings, average unit activations on each layer, and average input vectors as features, and represent each target word as a vector of MFCC features extracted from the audio signal synthetically produced for that word. For each utterance in the validation set, we randomly pick one positive and one negative target (i.e., one word that does and one that does not appear in the utterance) that is not a stop word. To balance the probability of a word being positive or negative, we use each positive target as a negative target for another utterance in the validation set. The MLP model is trained on the positive and negative examples corresponding to 80% of the utterances in the validation set of each dataset, and evaluated on the remaining 20%. Figure 5 shows the mean accuracy of the MLP on Flickr8K and COCO. All results using features extracted from the model are above chance (0.5), with the average unit activations of the hidden layers yielding the best results (0.65 for Flickr8K on layer 3, and 0.79 for COCO on layer 4). These numbers show that the speech model infers reliable information about word-level blocks from the low-level audio features it receives as input. The observed trend is similar to the previous task: average unit activations on the higher-level hidden layers are more informative for this task than the utterance embeddings, but the performance plateaus before the topmost layer. Figure 5: Mean accuracy values for predicting the presence of a word in an utterance for Flickr8K and COCO. Layers 1–5 represent the (normalized) average unit activations, whereas the first (#0) and last point represent average input vectors and utterance embeddings, respectively. 4.5 Sentence similarity Next we explore to what extent the model’s representations correspond to those of humans. We employ the Sentences Involving Compositional Knowledge (SICK) dataset (Marelli et al., 2014). SICK consists of image descriptions taken from 619 Figure 6: Pearson’s r of cosine similarities of averaged input MFCCs and COCO Speech RHN hidden layer activation vectors and embeddings of sentence pairs with relatedness scores from SICK, cosine similarity of COCO Text RHN embeddings, and edit similarity. Flickr8K and video captions from the SemEval 2012 STS MSRVideo Description data set (STS) (Agirre et al., 2012). 
Captions were paired at random, as well as modified to obtain semantically similar and contrasting counterparts, and the resulting pairs were rated for semantic similarity. For all sentence pairs in SICK, we generate synthetic spoken sentences and feed them to the COCO Speech RHN, and calculate the cosine similarity between the averaged MFCC input vectors, the averaged hidden layer activation vectors, and the sentence embeddings. Z-score transformation was applied before calculating the cosine similarities. We then correlate these cosine similarities with

• semantic relatedness according to human ratings

• cosine similarities according to z-score transformed embeddings from COCO Text RHN

• edit similarities, a measure of how similar the sentences are in form; specifically, 1 − normalized Levenshtein distance over character sequences

Figure 6 shows a boxplot over 10,000 bootstrap samples for all correlations. We observe that (i) correlation with edit similarity initially increases, then decreases; (ii) correlation with human relatedness scores and text model embeddings increases until layer 4, but decreases for hidden layer 5. The initially increasing and then decreasing correlation with edit similarity is consistent with the findings that information about form is encoded by lower layers. The overall growing correlation with both human semantic similarity ratings and the COCO Text RHN indicates that higher layers learn to represent semantic knowledge.

We were somewhat surprised by the pattern for the correlation with human ratings and the Text model similarities, which drops for layer 5. We suspect it may be caused by the model at this point in the layer hierarchy being strongly tuned to the specifics of the COCO dataset. To test this, we checked the correlations with COCO Text embeddings on validation sentences from the COCO dataset instead of SICK. These increased monotonically, in support of our conjecture.

4.6 Homonym disambiguation

Next we simulate the task of distinguishing between pairs of homonyms, i.e. words with the same acoustic form but different meaning. We group the words in the union of the training and validation data of the COCO dataset by their phonetic transcription. We then pick pairs of words which have the same pronunciation but different spelling, for example suite/sweet. We impose the following conditions: (a) both forms appear more than 20 times, (b) the two forms have different meaning (i.e. they are not simply variant spellings like theater/theatre), (c) neither form is a function word, and (d) the more frequent form constitutes less than 95% of the occurrences. This gives us 34 word pairs. For each pair we generate a binary classification task by taking all the utterances where either form appears, using average input vectors, utterance embeddings, and average unit activations as features.

Figure 7: Disambiguation performance per layer. Points #0 and #6 (connected via dotted lines) represent the input vectors and utterance embeddings, respectively. The black line shows the overall mean RER.
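The per-pair classification just described, whose normalization and cross-validation details are spelled out in the next paragraph, can be sketched roughly as follows with scikit-learn. `X` stands for one feature set over the selected utterances and `y` for which of the two written forms occurs (0/1); both names are illustrative, and the exact classifier settings used in the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def relative_error_reduction(X, y, n_splits=10, seed=0):
    """Accuracy of a logistic-regression probe vs. the majority baseline,
    expressed as relative error reduction (RER). y: integer 0/1 labels."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=cv, scoring="accuracy").mean()
    majority = np.bincount(y).max() / len(y)     # majority-class baseline accuracy
    return (acc - majority) / (1.0 - majority)   # fraction of remaining error removed
```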
Instances for all feature sets are normalized to unit L2 norm. For each task and feature set we run stratified 10-fold cross validation using Logistic Regression to predict which of the two words the utterance contains. Figure 7 shows, for each pair, the relative error reduction of each feature set with respect to the majority baseline. There is substantial variation across word pairs, but overall the task becomes easier as the features come from higher layers in the network. Some forms can be disambiguated with very high accuracy (e.g. sale/sail, cole/coal, pairs/pears), while some others cannot be distinguished at all (peaking/peeking, great/grate, mantle/mantel). We examined the sentences containing the failing forms, and found out that almost all occurrences of peaking and mantle were misspellings of peeking and mantel, which explains the impossibility of disambiguating these cases. 5 Conclusion We present a multi-layer recurrent highway network model of language acquisition from visually grounded speech signal. Through detailed analysis we uncover how information in the input signal is transformed as it flows through the network: formal aspects of language such as word identities that not directly present in the input are discovered and encoded low in the layer hierarchy, while semantic information is most strongly expressed in the topmost layers. Going forward we would like to compare the representations learned by our model to the brain activity of people listening to speech in order to determine to what extent the patterns we found correspond to localized processing in the human cortex. This will hopefully lead to a better understanding of language learning and processing by both artificial and neural networks. Acknowledgements We would like to thank David Harwath for making the Flickr8k Audio Caption Corpus publicly available. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv preprint arXiv:1608.04207 . Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, volume 2, pages 385–393. Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures. arXiv preprint arXiv:1601.03896 . Grzegorz Chrupała, Akos K´ad´ar, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Synthetically spoken COCO. https://doi.org/10.5281/zenodo.400926. Afsaneh Fazly, Afra Alishahi, and Suzanne Stevenson. 2010. A probabilistic computational model of cross-situational word learning. Cognitive Science: A Multidisciplinary Journal 34(6):1017–1063. Michael C. Frank, Noah D. Goodman, and Joshua B. Tenenbaum. 2007. A Bayesian framework for crosssituational word-learning. In Advances in Neural Information Processing Systems. volume 20. Lieke Gelderloos and Grzegorz Chrupała. 2016. From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. David Harwath and James Glass. 2015. Deep multimodal semantic embeddings for speech and images. In IEEE Automatic Speech Recognition and Understanding Workshop. David Harwath and James R Glass. 2017. Learning word-like units from joint audio-visual analysis. arXiv preprint arXiv:1701.07481 . David Harwath, Antonio Torralba, and James Glass. 2016. Unsupervised learning of spoken language with visual context. In Advances in Neural Information Processing Systems. pages 1858–1866. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv:1512.03385 . 621 Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47:853–899. ´Akos K´ad´ar, Grzegorz Chrupała, and Afra Alishahi. 2016. Representation of linguistic form and function in recurrent neural networks. CoRR abs/1602.08952. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 3128–3137. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Angeliki Lazaridou, Grzegorz Chrupała, Raquel Fern´andez, and Marco Baroni. 2016. Multimodal semantic learning from child-directed input. In The 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220 . Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014, Springer, pages 740–755. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A sick cure for the evaluation of compositional distributional semantic models. In LREC. pages 216–223. Yajie Miao, Jinyu Li, Yongqiang Wang, Shi-Xiong Zhang, and Yifan Gong. 2016. Simplifying long short-term memory acoustic models for fast training and decoding. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pages 2284–2288. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759 . Deb K Roy and Alex P Pentland. 2002. Learning words from sights and sounds: a computational model. Cognitive Science 26(1):113 – 146. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2014. ImageNet Large Scale Visual Recognition Challenge. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. Jeffrey M. Siskind. 1996. A computational study of cross-situational techniques for learning word-tomeaning mappings. Cognition 61(1-2):39–91. Gabriel Synnaeve, Maarten Versteegh, and Emmanuel Dupoux. 2014. Learning words from images and speech. In NIPS Workshop on Learning Semantics, Montreal, Canada. Zhiyuan Tang, Ying Shi, Dong Wang, Yang Feng, and Shiyue Zhang. 2016. 
Memory visualization for gated recurrent neural networks in speech recognition. arXiv preprint arXiv:1609.08789 . Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361 . Chen Yu and Dana H Ballard. 2004. A multimodal learning interface for grounding spoken language in sensory perceptions. ACM Transactions on Applied Perception (TAP) 1(1):57–80. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. 2014. Learning deep features for scene recognition using places database. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 487–495. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2016. Recurrent highway networks. arXiv preprint arXiv:1607.03474 . 622
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 623–633 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1058

Spectral Analysis of Information Density in Dialogue Predicts Collaborative Task Performance

Yang Xu and David Reitter
College of Information Sciences and Technology
The Pennsylvania State University
University Park, PA 16802, USA
[email protected], [email protected]

Abstract

We propose a perspective on dialogue that focuses on the relative information contributions of conversation partners as a key to successful communication. We predict the success of collaborative tasks in English and Danish corpora of task-oriented dialogue. Two features are extracted from the frequency domain representations of the lexical entropy series of each interlocutor: power spectrum overlap (PSO) and relative phase (RP). We find that PSO is a negative predictor of task success, while RP is a positive one. An SVM with these features significantly improved on previous task success prediction models. Our findings suggest that the strategic distribution of information density between interlocutors is relevant to task success.

1 Introduction

What factors affect whether information is conveyed effectively and reliably in conversations? Several theoretical frameworks have emerged that model dialogical behavior at different granularity levels. Can we use them to measure communicative effectiveness? Grounding theory (Clark and Brennan, 1991) models successful communication as a process during which "common ground" (i.e., mutual knowledge, beliefs, etc.) is jointly built among interlocutors. The interactive alignment model (IAM) (Pickering and Garrod, 2004) proposes that the ultimate goal of dialogue is the alignment of interlocutors' situational models, which is helped by alignment at all other, lower representation levels (e.g., lexical, syntactic, etc.), driven by psychologically well-documented priming effects.

Recently, empirical studies have verified the explanatory power of the above-mentioned theories, especially the IAM, utilizing dialogues recorded and transcribed from various collaborative tasks conducted in laboratory settings (Reitter and Moore, 2007; Reitter and Moore, 2014; Fusaroli et al., 2012; Fusaroli and Tylén, 2016). In those studies, the quality of communication is directly reflected in the collaborative performance of interlocutors, i.e., how successful they are in accomplishing the task. Although they do not come to fully agree on which theoretical account of dialogue (e.g., interactive alignment vs. interpersonal synergy) provides better explanations (see Section 2.1 for details), the majority of these studies have confirmed that the alignment of certain linguistic markers, lexical items, or syntactic rules between interlocutors correlates with task success. What is missing from the picture, however, is a computational understanding of how strategies of interaction and the mix of information contributions to the conversation facilitate successful communication.
This is understandable because those higher level concepts do not directly map onto the atomic linguistic elements and thus are much more difficult to define and operationalize. In the present study, we intend to explore this missing part of work by characterizing how the interaction between interlocutors in terms of their information contributions affects the quality of communication. 1.1 An information-based approach Recent work has already used information theory to study the dynamics of dialogue. Xu and Reitter (2016b) observed that the amount of lexical information (measured by entropy) from interlocutors of different roles, converges within the span of topic episodes in natural spoken dialogue. Anon (2017) interpret this converging pattern as a re623 flection of the dynamic process in which the information contributed by two interlocutors fluctuates in a complementary way at the early stage, and gradually reaches an equilibrium status. Xu and Reitter (2016b) also correlated this entropy converging pattern with the topic shift phenomenon that frequently occurs in natural conversation (Ng and Bradac, 1993), and proposed that it reflects the process of interlocutors building the common ground that is necessary for the ongoing topics of conversation. Based on Xu and Reitter’s (2016) finding that entropy converging pattern repeatedly occurs within dialogue (though not necessarily at strictly regular intervals), it is reasonable to expect that after applying some spectral analysis techniques (time space to frequency space conversion) to the entropy series of dialogue, the frequency space representations should demonstrate some patterns that are distinct from white noise, because the periodicity properties in time space are captured. Furthermore, we expect that how the frequency representations of two interlocutors correlate provides some information about the higher level properties of dialogue, e.g., the task performance etc. The thought is intuitive: If we imagine the entropy series from two interlocutors as two ideal sinusoidal signals s1 and s2 (supposedly of different frequencies, f1 and f2) (Figure 1), then the observed converging pattern can be thought of as a segment from the full spans of the signals. Then the frequency space properties, such as how close f1 and f2 are, and the phase difference φ between them, will definitely affect the shape of the converging pattern (solid lines in Figure 1). As Xu and Reitter (2016b) argues that the converging segment reflects the grounding process between interlocutors, it is reasonable to expect that the shape and length of this segment are reflective of how well interlocutors understand each other, and the overall collaborative performance as well. Based on the above considerations, the goal of the present study is to explore how the frequency space representations of the entropy series of dialogue are correlated with the collaborative performance of task. We first demonstrate that entropy series satisfy the prerequisites of spectral analysis techniques in Section 4. Then we use two frequency space statistics, power spectrum overlap (PSO) and relative phase (RP), to predict task success. The reasons of using these two specific inφ Time Entropy Signal s1 s2 Figure 1: Analogizing the entropy converging patterns reported by Xu and Reitter (2016b) to a segment from two periodic signals. The shadowed area and the solid lines indicate the observed entropy convergence between interlocutors. 
The dashed lines are the imaginary parts of the ideal signals. dices are discussed in Section 2.3, and their definitions are given in Section 3.3. The results are shown in Sections 5 to 7, and the implications are discussed. 2 Related Work 2.1 The success of dialogue The interactive-alignment model (IAM) (Pickering and Garrod, 2004) stipulates that communication is successful to the extent that communicators “understand relevant aspects of the world in the same way as each other” (Garrod and Pickering, 2009). Qualitative and quantitative studies (Garrod and A. Anderson, 1987; Pickering and Garrod, 2006; Reitter and Moore, 2014) have revealed that the alignment of linguistic elements at different representation levels between interlocutors facilitates the success of task-oriented dialogues. More recently, different theoretical accounts other than IAM, such as interpersonal synergy (Fusaroli et al., 2014) and complexity matching (Abney et al., 2014) have been proposed to explain the mechanism of successful dialogue from the perspective of dynamic systems. Fusaroli and Tyl´en (2016) compare the approaches of interactive alignment and interpersonal synergy in terms of how well they predict the collective performance in a joint task. They find that the synergy approach is a better predictor than the alignment approach. Abney et al. (2014) differentiate the concepts of behavior matching and complexity matching in dyadic interaction. They demonstrate the acoustic onset events in speech signals exhibit power law clustering across timescales, and the 624 complexity matching in these power law functions is reflective of whether the conversation is affiliative or argumentative. The perspective taken by the present study has some common places with Fusaroli and Tyl´en (2016) and Abney et al.’s (2014) work: we view dialogue as an interaction of two dynamic systems. The joint decision-making task used by Fusaroli and Tyl´en (2016) resulted in a small corpus of dialogue in Danish, which we will use for the present study. 2.2 Information density in natural language Information Theory (Shannon, 1948) predicts that the optimal way to communicate is to send information at a constant rate, a.k.a. the principle of entropy rate constancy (ERC). The way humans use natural language to communicate also follows this principle: by computing the local per-word entropy of the sentence (which, under the prediction of ERC, will increase with sentence position), ERC is confirmed in both written text (Genzel and Charniak, 2002; Genzel and Charniak, 2003; Keller, 2004; Qian and Jaeger, 2011) and spoken dialogue (Xu and Reitter, 2016b; Xu and Reitter, 2016a). The theory of uniform information density (UID) extends ERC to syntactic representations (Jaeger, 2010) and beyond. The information density in language, i.e., the distribution of entropy (predictability), reveal the discourse structure to some extent. For example, entropy drops at the boundaries between topics (Genzel and Charniak, 2003; Qian and Jaeger, 2011), and increases within a topic episode in dialogue (Xu and Reitter, 2016b) (see Section 1.1). The entropy of microblog text reflects changes in contextual information (e.g., an unexpected event in a sports game) (Doyle and Frank, 2015). In sum, per-word entropy quantifies the amount of lexical information in natural language, and therefore fulfills the needs of modeling the information contribution from interlocutors. 
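As a concrete illustration of the per-word entropy measure discussed in this subsection, the following sketch estimates it with a toy add-one-smoothed trigram model. This is only an illustration: the estimates used in this paper come from SRILM-trained trigram models, as described in Section 3.2, and the smoothing shown here is a simplifying assumption.

```python
import math
from collections import Counter

def train_trigram(corpus_sents):
    """corpus_sents: list of token lists from an outside training corpus."""
    tri, bi, vocab = Counter(), Counter(), set()
    for sent in corpus_sents:
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[(toks[i-2], toks[i-1], toks[i])] += 1
            bi[(toks[i-2], toks[i-1])] += 1
    return tri, bi, len(vocab)

def per_word_entropy(sent, tri, bi, V):
    """-(1/n) * sum_i log2 P(w_i | w_{i-2}, w_{i-1}) with add-one smoothing.
    The end-of-sentence token is included in the average in this sketch."""
    toks = ["<s>", "<s>"] + sent + ["</s>"]
    logp = 0.0
    for i in range(2, len(toks)):
        h = (toks[i-2], toks[i-1])
        p = (tri[(h[0], h[1], toks[i])] + 1) / (bi[h] + V)
        logp += math.log2(p)
    return -logp / (len(toks) - 2)
```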
2.3 Spectral analysis methodology Spectral analysis, also referred to as frequency domain analysis, is a pervasively used technique in physics, engineering, economics and social sciences. The key idea of it is to decompose a complex signal in time space into simpler components in frequency space, using mathematical operations such as Fourier transform (Bracewell, 1986). The application of spectral analysis in human language technology mainly focuses on processing the acoustic signals of human voice, and capturing the para-linguistics features relevant to certain tasks (Schuller et al., 2013). For example, Bitouk et al. (2010) find that utterance-level spectral features are useful for emotion recognition. Gregory Jr and Gallagher (2002) demonstrate that spectral information beneath 0.5 kHz can predict US president election outcomes. However, we are not aware of the usage of spectral analysis in studying linguistic phenomena at higher representation levels than the acoustic level. For our study, we are looking for some techniques that can capture the coupling between two signals at frequency space. The nature of the signal (whether it is language-related or not) should not be the first concern from the perspective of methodology. Therefore, studies outside the field of speech communication and linguistics could also be enlightening to our work. After searching the literature, we find that the spectral analysis techniques that Oullier et al. (2002) and Oullier et al. (2008) use to study the physical and social functions of human body movement are useful to our research goal. In Oullier et al.’s (2002) work, subjects stood in a moving room and were to track a target attached to the wall. A frequency space statistics, power spectrum overlap (PSO), was used to demonstrate the coupling between motion of the room and motion of the subject’s head. Stronger coupling effect (higher PSO) was found in the tracking task than a no-tracking baseline. PSO in nature quantifies how much the frequency space representations of two signals (power spectrum density) overlap. It allows us to explore the frequency space coupling of two interlocutors’ entropy series in dialogue. Similarly, Oullier et al. (2008) used the metrics of peak-to-peak relative phase (RP) and PSO to study the spontaneous synchrony in behavior that emerges between interactants as a result of information exchange. The signals to be analyzed were the flexion-extension movement of index fingers of two subjects sitting in front of each other. Both metrics showed different patterns when the participants see each other or not. RP, in their work, measures the magnitude of delay between two signals, and it corresponds to the notion of φ in Section 1.1. 625 3 Methods 3.1 Corpus data Two corpora are examined in this study: the HCRC Map Task Corpus (A. H. Anderson et al., 1991) and a smaller corpus in Danish from a joint decision-making study (Fusaroli et al., 2012), henceforth DJD. Map Task contains a set of 128 dialogues between two subjects, who accomplished a cooperative task together. They were given two slightly different maps of imaginary landmarks. One of them plays as the instruction giver, who has routes marked on her map, and the other plays as the instruction follower, who does not have routes. The task for them is to reproduce the giver’s route on the follower’s map. The participants are free to speak, but they cannot see each other’s map. The whole conversations were recorded, transcribed and properly annotated. 
The collaborative performance in the task is measured by the PATHDEV variable, which quantifies the deviation between the paths drawn by interlocutors. Larger values indicate poorer task performance.

DJD contains a set of 16 dialogues from native speakers of Danish (11,100 utterances and 56,600 words). In Fusaroli et al.'s (2012) original study the participants were to accomplish a series of visual perception task trials, by discussing the stimuli they saw and reaching a joint decision for each trial. The collaborative performance is measured by the CollectivePerformance variable, which is based on a psychometric function that measures the sensitivity of the dyad's joint decision to the actual contrast difference of the trial (Fusaroli et al., 2012). A higher value of this variable indicates better task performance.

The Switchboard Corpus (Godfrey et al., 1992) is used to train the language model for estimating the sentence entropy in Map Task. The Copenhagen Dependency Treebanks Corpus1 is used for the same purpose for DJD.

Footnote: (1) http://mbkromann.github.io/copenhagen-dependency-treebank/

3.2 Estimating information density in dialogue

The information density of language is estimated at the sentence level, by computing the per-word entropy of each sentence using a trigram language model trained from a different corpus. We consider a sentence to be a sequence of words, S = {w_1, w_2, \ldots, w_n}, and the per-word entropy is estimated by:

H(w_1 \ldots w_n) = -\frac{1}{n} \sum_{i=1}^{n} \log P(w_i \mid w_1 \ldots w_{i-1})    (1)

where P(w_i \mid w_1 \ldots w_{i-1}) is estimated by a trigram model that is trained on an outside corpus. The SRILM software (Stolcke, 2002) is used to train the language model and to compute sentence entropy.

Dialogue is a sequence of utterances contributed by two interlocutors. For the k-th dialogue, whose total utterance number is N_k, we mark it as D_k = {u_i^k | i = 1, 2, \ldots, N_k}, in which u_i^k is the i-th utterance. Map Task contains annotations of sentence structure in utterances, and one utterance could consist of several sentences that are syntactically independent. Thus we further split D_k into a sequence of sentences, D_k = {s_i^k | i = 1, 2, \ldots, N'_k}, in which N'_k is the number of sentences in D_k. Since DJD lacks the sentence annotations, we do not further split the utterance sequence, and simply treat an utterance as a complete sentence.

Given a sequence {s_i^k} (Map Task) or {u_i^k} (DJD), we calculate the per-word entropy for each item in the sequence:

H_k = { H(s_i^k) or H(u_i^k) | i = 1, 2, \ldots, N'_k (or N_k) }    (2)

where H(s_i^k) or H(u_i^k) is computed according to Equation 1. Then we split the entropy series H_k into two sub-series by the source of utterances (i.e., who speaks them), resulting in H_k^A for interlocutor A, and H_k^B for interlocutor B. For Map Task, the two interlocutors have distinct roles, instruction giver and follower. Thus the resulting two entropy series are H_k^g and H_k^f. These per-interlocutor entropy series will be the input of our next-step spectral analysis.

3.3 Computing power spectrum overlap and relative phase

The time intervals between utterances (or sentences) vary, but since we care about the average information contribution within a complete semantic unit, we treat entropy series as regular time series. The time scale is not measured in seconds but in turns (or sentences). For a given dialogue D_k, we apply the fast Fourier transform (FFT) to its two entropy series H_k^A and H_k^B, and obtain their power spectra (power spectral densities), P_k^A and P_k^B.
The power spectra are estimated with the periodogram method provided by the open source R software. The Y axis of a power spectrum is the squared amplitude of signal (or power), and X axis ranges from 0 to π/2 (we do not have sampling frequency, thus the X axis is in angular frequency but not in Hz). The power spectrum overlap, PSOk, is calculated by computing the common area under the curves of P A k and P B k is calculated, and normalizing by the total area of the two curves (see Figure 2). PSOk ranges from 0 to 1, and a larger value indicates higher similarity between P A k and P B k . Common area 0 2 4 6 8 0.0 0.1 0.2 0.3 0.4 0.5 Frequency Power Spectrum Pk A Pk B Figure 2: How PSO is computed. The blue shadow is the common area under two spectrums. The relative phase (RP) between HA k and HB k is directly returned by the spectrum function in R. It is a vector of real numbers that range from 0 to π, and each element represent the phase difference between two signals at a particular frequency position of the spectrum. 4 Prerequisites of Spectral Analysis Before proceeding to the actual analysis, we first examine whether the data we use satisfy some of the prerequisites of spectral analysis techniques. One common assumption of Fourier transforms is that the signals (time series) are stationary (Dwivedi and Subba Rao, 2011). Stationarity means that the mean, variance and other distributional properties do not change over time (Natrella, 2010). Another presumption we hold is that the entropy series contain some periodic patterns (see Section 1.1), which means their power spectrum should differ from that of white noise. 4.1 Examine stationarity We use three pervasively used statistical tests to test the stationarity of our entropy series data: the Table 1: Percentage stationary data Corpus ADF KPSS PP Map Task 82.4% 95.5% 100% DJD 100% 81.3% 100% augmented Dickey-Fuller (ADF) test (Dickey and Fuller, 1979), the Kwiatkowski-Phillips-SchmidtShin (KPSS) test (Kwiatkowski et al., 1992), and the Phillips-Perron (PP) test (Phillips and Perron, 1988). The percentage of entropy series that pass the stationarity tests are shown in Table 1. We can see that the majority of our data satisfy the assumption of stationarity, and thus it is valid to conduct Fourier transform on the entropy series. The stationarity property seems contradictory to the previous findings about entropy increase in written text and spoken dialogue (Genzel and Charniak, 2002; Genzel and Charniak, 2003; Xu and Reitter, 2016b), because stationarity predicts that the mean entropy stays constant over time. We examine this in our data by fitting a simple linear model with entropy as the dependent, and sentence position as the independent variable, which yields significant (marginal) effects of the latter: For Map Task, β = 2.3 × 10−3, p < .05, Adj-R2 = 1.7 × 10−4; For DJD, β = 7.2 × 10−5, p = .06, Adj-R2 = 2.2 × 10−4. It indicates that the stationarity of entropy series does not conflict with the entropy increasing trend predicted by the principle of ERC (Shannon, 1948). We conjecture that stationarity satisfies because the effect size (Adj-R2) of entropy increase is very small. 4.2 Comparison with white noise Power spectra for all entropy series are obtained with an FFT. We compare them with those of white noise. The white noise data are simulated with i.i.d. random data points that are generated from normal distributions (same means and standard deviations as the actual data). 
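For concreteness, the scipy-based sketch below mirrors the computations just described: periodograms of the two per-interlocutor entropy series, their overlap (PSO), and matched white noise. It is an approximation rather than the paper's exact procedure (which uses R's periodogram function); in particular, normalizing each spectrum to unit area and summing the elementwise minima is one plausible reading of the "common area over total area" definition in Section 3.3, and the interpolation step is an added assumption to handle series of unequal length.

```python
import numpy as np
from scipy.signal import periodogram

def pso(entropy_a, entropy_b):
    """Power spectrum overlap between two turn-indexed entropy series."""
    f_a, p_a = periodogram(entropy_a)            # angular frequencies up to Nyquist
    f_b, p_b = periodogram(entropy_b)
    p_b = np.interp(f_a, f_b, p_b)               # common frequency grid (assumption)
    p_a, p_b = p_a / p_a.sum(), p_b / p_b.sum()  # unit-area spectra
    return float(np.minimum(p_a, p_b).sum())     # 1.0 for identical spectra

def matched_white_noise(series, seed=0):
    """i.i.d. Gaussian series with the same mean and s.d. as the input."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    return rng.normal(series.mean(), series.std(), size=len(series))
```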
Figure 3 shows the smoothed average spectra of the actual entropy data and the simulated white noise data. White noise signals should demonstrate a constant power spectral density (Narasimhan and Veena, 2005), and if the entropy series is not completely random, then their average spectrum should be flat. Linear models show that the average spectra of the entropy data have slopes that are significantly larger than zero (for Map Task, β = 2.3 × 10−2, SE = 9.4 × 10−3, p < .05; for DJD, β = 314.1, SE = 19.8, p < .001), while the slopes of the white noise data are not significantly different from zero. This confirms our presumption that the entropy series of dialogue contains some periodic patterns that are identifiable in frequency space.

Figure 3: Comparing the average power spectra of the actual entropy data and white noise, for (a) Map Task and (b) DJD. There are significant linear correlations between power (Y axis) and frequency (X axis) for the actual entropy data, which means the data are not completely random. Shadowed areas are 95% C.I.

We also conduct a Ljung-Box test (Ljung and Box, 1978) to examine how the entropy series is different from white noise. The null hypothesis is that the time series being tested is independent of the lagged sequence of itself. The test on a white noise series will give large p-values, for any lags greater than 0, because of its randomness. We try several lags on each entropy series, and pick the smallest p-value. Consequently, we obtain a mean p-value of .23 on Map Task, and a mean p-value of .27 on DJD. Therefore, we cannot reject the null hypothesis for all the entropy series data, but the Type-I error of considering them as different from white noise is pretty low.

5 PSO Predicts Task Success

5.1 Results of linear models

We compute PSO for all the dialogues in Map Task and DJD and fit two linear models using PSO as predictor, with PATHDEV and CollectivePerformance as dependent variables respectively. PSO is a reliable predictor in both models (p < .05). The coefficients are shown in Table 2. Since PATHDEV is a measure of failure, but collaborative task performance is a measure of success, the negative correlation between PSO and collaborative task performance is consistent. Regression lines with residuals are plotted in Figure 4.

Table 2: Coefficients of PSO in predicting PATHDEV (Map Task) and CollectivePerformance (DJD). * indicates p < .05.
Dependent               β      SE    F      Adj-R2
PATHDEV                 124.8  49.4  6.39*  .045
CollectivePerformance   -40.9  15.9  6.60*  .271

Figure 4 (a) suggests a heteroscedasticity problem, because the right half of the data points seem to stretch up along the y axis. This was confirmed by a Breusch-Pagan test (Breusch and Pagan, 1979) (BP = 5.62, p < .05). To rectify this issue, we adopt a Box-Cox transformation (Box and Cox, 1964) on the dependent variable, PATHDEV, which is a typical way of handling heteroscedasticity. The new model that uses PSO to predict the Box-Cox transformed PATHDEV also yields significant coefficients: β = 3.85, SE = 1.67, F(1, 113) = 5.32, p < .05. Therefore, the correlation between PSO and PATHDEV is reliable. As for DJD, due to the lack of data (we only have 16 dialogues), we do not run further diagnostic analysis on the regression model.

5.2 Discussion

The coupling of entropy series in frequency space is negatively correlated with task success.
In other words, synchrony between interlocutors in terms of their information distribution hinders the success of collaboration. By "synchrony", we mean an overlap in the frequencies at which they choose to inject novel information into the conversation. This conclusion seems contradictory to the perspective of interactive alignment at first glance. However, here we are starting with a very high-level model of dialogue that does not refer to linguistic devices. Instead, we utilize the concept of "information density" and the entropy metric of natural language to paint the picture of a system in which communicators inject novelty into the dialogue, and in which each communicator does so regularly and with a set of overlapping frequencies. We assume that rapid change of sentence entropy, i.e., the high frequency components in the spectrum, corresponds to the moments in conversation where one interlocutor brings relatively novel content to the table, such as a detailed instruction, a strange question, an unexpected response, etc. This assumption is reasonable because previous work has shown that sudden change in entropy predicts topic change in dialogue (Genzel and Charniak, 2003; Qian and Jaeger, 2011; Xu and Reitter, 2016b).

Figure 4: Regression lines of linear models using PSO to predict PATHDEV in Map Task (a) and CollectivePerformance in DJD (b). Shadowed areas are 95% C.I.

We argue that higher synchrony (larger overlap in frequency space) in terms of how much novelty each interlocutor contributes does not necessarily lead to better outcomes of communication. Rather, we would expect the correlation to be opposite (and our empirical results confirm this), because dialogue is a joint activity, in which taking on different roles as interlocutors (e.g., the one who gives orders versus the one who follows) is often required to push the activity along (Clark, 1996). A dialogue with maximal synchrony or frequency overlap would be one where partners take turns at regular intervals. Perhaps because such regularity in turn-taking assigns no special roles to interlocutors, and because they engage in turn-taking with no regard for content, it is not strange that such synchrony is disadvantageous.

Let's look at several scenarios of different synchrony levels between interlocutors: First, high synchrony due to both interlocutors contributing a large amount of new information, which means there is more overlap near the high frequency band of the spectra. In this case, they are more likely to have difficulty in comprehending each other due to the potential information overload. Situations such as arguing, or both speakers asking a lot of questions, are good examples. Second, high synchrony due to both interlocutors providing ineffective information, which indicates overlap in the spectra near the low frequency band. Obviously this type of ineffective communication is not helpful to the collaborative task. Third, low synchrony due to one interlocutor providing more information and the other one providing less, which means the overlap in the spectra is minimal.
An example of this case is that one interlocutor is saying something important, while the other one is producing short utterances such “uh-huh”, “yes”, or short questions to make sure that they are on the same page, which is known as the back-channel mechanism in conversation (Orestr¨om, 1983). This complementary style of communication allows them to build mutual understand of each other’s intention, and thus reaches better collaborative performance. 6 RP Predicts Task Success 6.1 Results of linear models We obtain the relative phase (RP) vector (absolute values) of all frequency components, and fit linear models using the mean of RP as predictor, and task performance as the dependent variable. We get non-significant coefficients for both models: For Map Task, F(1, 113) = .004, p > .05; for DJD, F(1, 14) = .772, p > .05. This suggests that the phase information of all frequency components in spectrum is not very indicative of task success. 629 The power spectra describe the distribution of energy across the span of frequency components that compose the signal. The frequency components with higher energy (peaks in spectrum) are more dominant than those with lower energy (troughs) in determining the nature of the signal. Therefore it makes sense to only include the peak frequencies into the model, because they are more “representative” of the signal, and so the “noise” from the low energy frequencies are filtered out. Thus we obtain RP from the local peak frequency components, and use the mean, median, and maximum values of them as predictors. It turns out that for Map Task, the maximum of RP is a significant predictor (the mean and median are left out via stepwise analysis). For DJD, the mean of RP is a significant predictor of task success (when median and maximum are included in the model). (see Table 3). Table 3: Coefficients of the linear models using the mean, median, and maximum values of RP from peak frequency components to predict task performance. ∗p < .05, † p < .1. Corpus Predictor β SE t score Map Task max -64.9 30.3 -2.14* DJD mean 15.6 5.7 2.76* median -7.4 3.6 -2.06† max -11.5 7.2 -1.60 From the significant effect of maximum RP in Map Task and mean RP in DJD, it is safe to state that RP is positively correlated with task performance. However, this relationship is not as straight-forward as PSO, because of the marginal effect at the opposite direction. A more finegrained analysis is required, but it is outside the scope of this study. 6.2 Discussion The relative phase in frequency space can be understood as the “lag” between signals in time space. Imagine that we align the two entropy series from one dialogue onto the same time scale (just like Figure 1), the distance between the entropy “peaks” is proportionate to the relative phase in frequency space. Then, the positive correlation between relative phase and task performance suggests that relatively large delays between entropy Table 4: R2 performance on the HCRC MapTask task success prediction task (percentage of variance explained). 10-fold cross-validated by dialogue; same folds for each model. Reitter and Moore (2007) (R&M) contained length and lexical and syntactic repetition features. Model R2 R&M .17 R&M LENGTH only .09 R&M LENGTH only (C=.5) .1260 R&M (C=.5) .1771 R&M + PSO + RP .2826 R&M + PSO*RP .2435 R&M LENGTH only + PSO*RP .2494 “surges” seen in each interlocutor are beneficial to collaborative performance. 
The delay of entropy surges can be understood as a strategy for an interlocutor to distribute information in his or her own utterance accordingly with the information received. For example, after interlocutor A contributes a big piece of information, the other one, B, does not rush to make new substantial contributions, but instead keeps her utterances at low entropy until it is the proper time to take a turn to contribute. This does not have to coincide with dialogic turn-taking. This delay gives B more time to “digest” the information provided by A, which could be an instruction that needs to be comprehended, or a question that needs to be thought about and so on. A relatively long delay guarantees enough time for interlocutors to reach mutual understanding. On the contrary, if B rushes to speak a lot shortly after the A’s input, then it will probably cause information overload and be harmful to communication. Therefore, we believe that the RP statistic captures the extent to which interlocutors manage the proper “timing” of information contribution to maintain effective communication. 7 Prediction Task Here we explore whether the frequency domain features, PSO and RP, can help with an existing framework that utilizes alignment features, such as the repetition of lexical and syntactic elements, to predict the success of dialogue in MapTask (Reitter and Moore, 2007). 630 R&M described an SVM model that takes into the repetition count of lexicons (LEXREP) and syntax structures (SYNREP), and the length of dialogues (LENGTH) as features. The full model achieves an R2 score of .17, which means that it can account for 17% of the variance of task success. We add the new PSO and RP (mean, median and maximum RP features per dialogue are included) covariates to the original SVM model. An RBF kernel (γ = 5) was used. The cost parameter C was (coarsely) tuned on different cross-validation folds to reduce overfitting on this relatively small dataset, and the R&M’s original full model was recalculated (shown in Table 4 as R&M). Two models with PSO and RP interactions (once without the alignment/repetition features) are shown for comparison. (See Table 4). Significant improvement in the model’s explanatory power, i.e., R2, is gained after the PSO and RP features are added. The best model we have is by adding PSO and RP as predictors without the interaction term (bold number in Table 4), which gives about 60% increase of R2 compared to R&M’s full model. Note that even if we exclude the alignment features, and include only (LENGTH) and the frequency features (last row in Table 4), the performance also exceeds R&M’s full model. The results indicate that the frequency domain features (PSO and RP) of the sentence information density can capture some hidden factors of task success that are unexplained by the alignment approach. It is not surprising that how people coordinate their information contribution matters a lot to the success of the collaboration. What we show here is that regular, repeated patterns of information-dense and information-sparse turns seem to make speakers more or less compatible with each other. Whether individuals have typical patterns (frequency distributions) of information density, or whether this is a result of dynamic interaction in each particular dialogue, remains to be seen. 
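A rough scikit-learn sketch of this kind of model is given below, using support vector regression with an RBF kernel, since task success is a continuous score and performance is reported as R². The feature matrix and the dialogue-level fold assignment are placeholders, and the exact kernel settings and tuning procedure of the models in Table 4 are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_score

def task_success_r2(features, success, gamma=5.0, C=0.5, n_folds=10, seed=0):
    """features: (n_dialogues, d) matrix of per-dialogue predictors
    (e.g. LENGTH, repetition counts, PSO, RP statistics);
    success: (n_dialogues,) task-performance scores.
    Returns the mean cross-validated R^2 over dialogue folds."""
    model = SVR(kernel="rbf", gamma=gamma, C=C)
    cv = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    scores = cross_val_score(model, features, success, cv=cv, scoring="r2")
    return float(np.mean(scores))
```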
8 Conclusions The empirical results of the present study suggest that examining how the information contribution from interlocutors co-develops can provide a way to understand dialogue from a higher-level perspective, which has been missing in existing work. Our work adds a brick to the series of endeavors on studying the linguistic and behavioral factors of successful dialogue, and for the first time (as far as we know) demonstrates quantitatively that the dynamics of not just “what” and “how” we say, but also “how much” we say and the “timing” of distributing what we say in dialogue, are relevant to the quality of communication. Although the way we model information in language is simply the entropy at lexical level, we believe the findings still reveal the nature of information production and processing in dialogue. We hope that by comparing and combining our methodology with other approaches of studying dialogue, we can reach a more comprehensive and holistic understanding of this common yet mysterious human practice. Acknowledgments We thank Riccardo Fusaroli for providing the DJD dataset. We have received very helpful input from Gesang Zeren in developing the initial ideas of this project. The work leading to this paper was funded by the National Science Foundation (IIS-1459300 and BCS-1457992). References Abney, D. H., Paxton, A., Dale, R., & Kello, C. T. (2014). Complexity matching in dyadic conversation. Journal of Experimental Psychology: General, 143(6), 2304. Anderson, A. H., Bader, M., Bard, E. G., Boyle, E., Doherty, G., Garrod, S., ... Miller, J. et al. (1991). The HCRC map task corpus. Language and Speech, 34(4), 351–366. Bitouk, D., Verma, R., & Nenkova, A. (2010). Class-level spectral features for emotion recognition. Speech Communication, 52(7), 613–625. Box, G. E. & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society. Series B (Methodological), 211–252. Bracewell, R. N. (1986). The Fourier transform and its applications. New York: McGrawHill. Breusch, T. S. & Pagan, A. R. (1979). A simple test for heteroscedasticity and random coefficient variation. Econometrica: Journal of the Econometric Society, 1287–1294. Clark, H. H. (1996). Using language. Cambridge University Press. 631 Clark, H. H. & Brennan, S. E. (1991). Grounding in communication. Perspectives on Socially Shared Cognition, 13(1991), 127–149. Dickey, D. A. & Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366a), 427– 431. Doyle, G. & Frank, M. C. (2015). Shared common ground influences information density in microblog texts. In Proceedings of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (naacl-hlt). Denver, DO. Dwivedi, Y. & Subba Rao, S. (2011). A test for second-order stationarity of a time series based on the discrete Fourier transform. Journal of Time Series Analysis, 32(1), 68– 91. Fusaroli, R., Bahrami, B., Olsen, K., Roepstorff, A., Rees, G., Frith, C., & Tyl´en, K. (2012). Coming to terms quantifying the benefits of linguistic coordination. Psychological Science, 23(8), 931–939. Fusaroli, R., Raczaszek-Leonardi, J., & Tyl´en, K. (2014). Dialog as interpersonal synergy. New Ideas in Psychology, 32, 147–157. Fusaroli, R. & Tyl´en, K. (2016). Investigating conversational dynamics: interactive alignment, interpersonal synergy, and collective task performance. Cognitive Science, 40(1), 145–171. 
Garrod, S. & Anderson, A. (1987). Saying what you mean in dialogue: a study in conceptual and semantic co-ordination. Cognition, 27(2), 181–218. Garrod, S. & Pickering, M. J. (2009). Joint action, interactive alignment, and dialog. Topics in Cognitive Science, 1(2), 292–304. Genzel, D. & Charniak, E. (2002). Entropy rate constancy in text. In Proc. 40th Annual Meeting on Association for Computational Linguistics (pp. 199–206). Philadelphia, PA. Genzel, D. & Charniak, E. (2003). Variation of entropy and parse trees of sentences as a function of the sentence number. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (pp. 65–72). Association for Computational Linguistics. Godfrey, J. J., Holliman, E. C., & McDaniel, J. (1992). Switchboard: telephone speech corpus for research and development. In International Conference on Acoustics, Speech, and Signal Processing (Vol. 1, pp. 517– 520). IEEE. San Francisco, CA. Gregory Jr, S. W. & Gallagher, T. J. (2002). Spectral analysis of candidates’ nonverbal vocal communication: predicting us presidential election outcomes. Social Psychology Quarterly, 298–308. Jaeger, T. F. (2010). Redundancy and reduction: speakers manage syntactic information density. Cognitive Psychology, 61(1), 23–62. Keller, F. (2004). The entropy rate principle as a predictor of processing effort: an evaluation against eye-tracking data. In Proc. conference on Empirical Methods in Natural Language Processing (pp. 317–324). Barcelona, Spain. Kwiatkowski, D., Phillips, P. C., Schmidt, P., & Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root: how sure are we that economic time series have a unit root? Journal of Econometrics, 54(1-3), 159–178. Ljung, G. M. & Box, G. E. (1978). On a measure of lack of fit in time series models. Biometrika, 297–303. Narasimhan, S. & Veena, S. (2005). Signal processing: principles and implementation. Alpha Science Int’l Ltd. Natrella, M. (2010). Nist/sematech e-handbook of statistical methods. NIST/SEMATECH. Ng, S. H. & Bradac, J. J. (1993). Power in language: Verbal communication and social influence. Sage. Orestr¨om, B. (1983). Turn-taking in english conversation. Lund: CWK Gleerup. Oullier, O., Bardy, B. G., Stoffregen, T. A., & Bootsma, R. J. (2002). Postural coordination in looking and tracking tasks. Human Movement Science, 21(2), 147–167. Oullier, O., De Guzman, G. C., Jantzen, K. J., Lagarde, J., & Kelso, S. J. (2008). Social coordination dynamics: measuring human bonding. Social Neuroscience, 3(2), 178–192. Phillips, P. C. & Perron, P. (1988). Testing for a unit root in time series regression. Biometrika, 335–346. 632 Pickering, M. J. & Garrod, S. (2004). Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27(02), 169–190. Pickering, M. J. & Garrod, S. (2006). Alignment as the basis for successful communication. Research on Language and Computation, 4(2-3), 203–228. Qian, T. & Jaeger, T. F. (2011). Topic shift in efficient discourse production. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3313–3318). Reitter, D. & Moore, J. D. (2007). Predicting success in dialogue. In Proc. 45th Annual Meeting of the Association of Computational Linguistics (pp. 808–815). Prague, Czech Republic. Reitter, D. & Moore, J. D. (2014). Alignment and task success in spoken dialogue. Journal of Memory and Language, 76, 29–46. 
Schuller, B., Steidl, S., Batliner, A., Burkhardt, F., Devillers, L., Müller, C., & Narayanan, S. (2013). Paralinguistics in speech and language - state-of-the-art and the challenge. Computer Speech and Language, 27(1), 4–39. Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27, 379–423. Stolcke, A. (2002). SRILM - an extensible language modeling toolkit. In The 7th International Conference on Spoken Language Processing. Denver, Colorado. Xu, Y. & Reitter, D. (2016a). Convergence of syntactic complexity in conversation. In Proc. 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 443–448). Berlin, Germany. Xu, Y. & Reitter, D. (2016b, August). Entropy Converges Between Dialogue Participants: Explanations from an Information-Theoretic Perspective. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 537–546). Berlin, Germany: Association for Computational Linguistics.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 634–642 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1059 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 634–642 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1059 Affect-LM: A Neural Language Model for Customizable Affective Text Generation Sayan Ghosh1, Mathieu Chollet1, Eugene Laksana1, Louis-Philippe Morency2 and Stefan Scherer1 1Institute for Creative Technologies, University of Southern California, CA, USA 2Language Technologies Institute, Carnegie Mellon University, PA, USA 1{sghosh,chollet,elaksana,scherer}@ict.usc.edu [email protected] Abstract Human verbal communication includes affective messages which are conveyed through use of emotionally colored words. There has been a lot of research in this direction but the problem of integrating state-of-the-art neural language models with affective information remains an area ripe for exploration. In this paper, we propose an extension to an LSTM (Long Short-Term Memory) language model for generating conversational text, conditioned on affect categories. Our proposed model, Affect-LM enables us to customize the degree of emotional content in generated sentences through an additional design parameter. Perception studies conducted using Amazon Mechanical Turk show that AffectLM generates naturally looking emotional sentences without sacrificing grammatical correctness. Affect-LM also learns affectdiscriminative word representations, and perplexity experiments show that additional affective information in conversational text can improve language model prediction. 1 Introduction Affect is a term that subsumes emotion and longer term constructs such as mood and personality and refers to the experience of feeling or emotion (Scherer et al., 2010). Picard (1997) provides a detailed discussion of the importance of affect analysis in human communication and interaction. Within this context the analysis of human affect from text is an important topic in natural language understanding, examples of which include sentiment analysis from Twitter (Nakov et al., 2016), affect analysis from poetry (Kao and Jurafsky, Affect-LM “I feel so …” Context Words “… great about this.” “… good about this.” “… awesome about this.” Affect Strength Low High Mid Affect Category ! " # $ % Automatic Inference (optional) et−1 ct−1 β Figure 1: Affect-LM is capable of generating emotionally colored conversational text in five specific affect categories (et−1) with varying affect strengths (β). Three generated example sentences for happy affect category are shown in three distinct affect strengths. 2012) and studies of correlation between function words and social/psychological processes (Pennebaker, 2011). People exchange verbal messages which not only contain syntactic information, but also information conveying their mental and emotional states. Examples include the use of emotionally colored words (such as furious and joy) and swear words. The automated processing of affect in human verbal communication is of great importance to understanding spoken language systems, particularly for emerging applications such as dialogue systems and conversational agents. 
Statistical language modeling is an integral component of speech recognition systems, with other applications such as machine translation and information retrieval. There has been a resurgence of research effort in recurrent neural networks for language modeling (Mikolov et al., 2010), which have yielded performances far superior to baseline language models based on n-gram approaches. However, there has not been much effort in building neural language models of text that leverage affective information. Current literature on deep learning for language understanding focuses mainly on representations based on 634 word semantics (Mikolov et al., 2013), encoderdecoder models for sentence representations (Cho et al., 2015), language modeling integrated with symbolic knowledge (Ahn et al., 2016) and neural caption generation (Vinyals et al., 2015), but to the best of our knowledge there has been no work on augmenting neural language modeling with affective information, or on data-driven approaches to generate emotional text. Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM. Our model is trained on conversational speech corpora, common in language modeling for speech recognition applications (Bulyko et al., 2007). Figure 1 provides an overview of our Affect-LM and its ability to generate emotionally colored conversational text in a number of affect categories with varying affect strengths. While these parameters can be manually tuned to generate conversational text, the affect category can also be automatically inferred from preceding context words. Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool (Pennebaker et al., 2001). Our primary research questions in this paper are: Q1:Can Affect-LM be used to generate affective sentences for a target emotion with varying degrees of affect strength through a customizable model parameter? Q2:Are these generated sentences rated as emotionally expressive as well as grammatically correct in an extensive crowd-sourced perception experiment? Q3:Does the automatic inference of affect category from the context words improve language modeling performance of the proposed Affect-LM over the baseline as measured by perplexity? The remainder of this paper is organized as follows. In Section 2, we discuss prior work in the fields of neural language modeling, and generation of affective conversational text. In Section 3 we describe the baseline LSTM model and our proposed Affect-LM model. Section 4 details the experimental setup, and in Section 5, we discuss results for customizable emotional text generation, perception studies for each affect category, and perplexity improvements over the baseline model before concluding the paper in Section 6. 2 Related Work Language modeling is an integral component of spoken language systems, and traditionally ngram approaches have been used (Stolcke et al., 2002) with the shortcoming that they are unable to generalize to word sequences which are not in the training set, but are encountered in unseen data. Bengio et al. (2003) proposed neural language models, which address this shortcoming by generalizing through word representations. Mikolov et al. (2010) and Sundermeyer et al. 
(2012) extend neural language models to a recurrent architecture, where a target word wt is predicted from a context of all preceding words w1, w2, ..., wt−1 with an LSTM (Long Short-Term Memory) neural network. There has also been recent effort on building language models conditioned on other modalities or attributes of the data. For example, Vinyals et al. (2015) introduced the neural image caption generator, where representations learnt from an input image by a CNN (Convolutional Neural Network) are fed to an LSTM language model to generate image captions. Kiros et al. (2014) used an LBL model (Log-Bilinear language model) for two applications: image retrieval given sentence queries, and image captioning. Lower perplexity was achieved by models conditioned on images than by language models trained only on text. In contrast, previous literature on affective language generation has not focused sufficiently on customizable state-of-the-art neural network techniques to generate emotional text, nor has it quantitatively evaluated models on multiple emotionally colored corpora. Mahamood and Reiter (2011) use several NLG (natural language generation) strategies for producing affective medical reports for parents of neonatal infants undergoing healthcare. While they study the difference between affective and non-affective reports, their work is limited to heuristic-based systems and does not include conversational text. Mairesse and Walker (2007) developed PERSONAGE, a system for dialogue generation conditioned on extraversion dimensions. They trained regression models on ground-truth judges' selections to automatically determine which of the sentences selected by their model exhibit appropriate extroversion attributes. In Keshtkar and Inkpen (2011), the authors use heuristics and rule-based approaches for emotional sentence generation. Their generation system is not trained on large corpora, and they use additional syntactic knowledge of parts of speech to create simple affective sentences. In contrast, our proposed approach builds on state-of-the-art approaches for neural language modeling, utilizes no syntactic prior knowledge, and generates expressive emotional text. 3 Model 3.1 LSTM Language Model Prior to providing a formulation for our proposed model, we briefly describe an LSTM language model. We have chosen this model as a baseline since it has been reported to achieve state-of-the-art perplexities compared to other approaches, such as n-gram models with Kneser-Ney smoothing (Jozefowicz et al., 2016). Unlike an ordinary recurrent neural network, an LSTM network does not suffer from the vanishing gradient problem, which is more pronounced for very long sequences (Hochreiter and Schmidhuber, 1997). Formally, by the chain rule of probability, for a sequence of M words w1, w2, ..., wM, the joint probability of all words is given by: $P(w_1, w_2, \ldots, w_M) = \prod_{t=1}^{M} P(w_t \mid w_1, w_2, \ldots, w_{t-1})$ (1) If the vocabulary consists of V words, the conditional probability of word wt as a function of its context ct−1 = (w1, w2, ..., wt−1) is given by: $P(w_t = i \mid c_{t-1}) = \frac{\exp(U_i^T f(c_{t-1}) + b_i)}{\sum_{j=1}^{V} \exp(U_j^T f(c_{t-1}) + b_j)}$ (2) f(·) is the output of an LSTM network which takes the context words w1, w2, ..., wt−1 as inputs through one-hot representations; U is a matrix of word representations which, on visualization, we have found to correspond to POS (Part of Speech) information; and bi is a bias term capturing the unigram occurrence of word i.
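As a concrete illustration of Equation 2 (not the authors' code), the sketch below computes the next-word distribution from an LSTM context encoding: a softmax over U f(c_{t-1}) + b. The vocabulary size, dimensionality, and randomly initialised parameters are stand-ins for trained values.

```python
# Minimal sketch of Equation 2: softmax over U f(c_{t-1}) + b.
# V (vocabulary size), d (LSTM output size) and all parameters are illustrative.
import numpy as np

V, d = 10000, 200
rng = np.random.default_rng(0)

U = rng.normal(scale=0.01, size=(V, d))   # word representation matrix U
b = np.zeros(V)                            # unigram bias terms b_i
f_c = rng.normal(size=d)                   # stand-in for the LSTM encoding f(c_{t-1})

logits = U @ f_c + b                       # U_i^T f(c_{t-1}) + b_i for every word i
logits -= logits.max()                     # numerical stability before exponentiation
p_next = np.exp(logits) / np.exp(logits).sum()

next_word_id = int(np.argmax(p_next))      # most probable next word under this model
```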
Equation 2 expresses the word wt as a function of its context for a LSTM language model which does not utilize any additional affective information. 3.2 Proposed Model: Affect-LM The proposed model Affect-LM has an additional energy term in the word prediction, and can be described by the following equation: P(wt = i|ct−1, et−1) = exp (UiT f(ct−1) + βViT g(et−1) + bi) PV j=1 exp(UjT f(ct−1) + βVjT g(et−1) + bj) (3) et−1 is an input vector which consists of affect category information obtained from the words in the context during training, and g(.) is the output of a network operating on et−1.Vi is an embedding learnt by the model for the i-th word in the vocabulary and is expected to be discriminative of the affective information conveyed by each word. In Figure 4 we present a visualization of these affective representations. The parameter β defined in Equation 3, which we call the affect strength defines the influence of the affect category information (frequency of emotionally colored words) on the overall prediction of the target word wt given its context. We can consider the formulation as an energy based model (EBM), where the additional energy term captures the degree of correlation between the predicted word and the affective input (Bengio et al., 2003). 3.3 Descriptors for Affect Category Information Our proposed model learns a generative model of the next word wt conditioned not only on the previous words w1, w2, ..., wt−1 but also on the affect category et−1 which is additional information about emotional content. During model training, the affect category is inferred from the context data itself. Thus we define a suitable feature extractor which can utilize an affective lexicon to infer emotion in the context. For our experiments, we have utilized the Linguistic Inquiry and Word Count (LIWC) text analysis program for feature extraction through keyword spotting. Introduced by Pennebaker et al. (2001), LIWC is based on a dictionary, where each word is assigned to a predefined LIWC category. The categories are chosen based on their association with social, affective, and cognitive processes. For example, the dictionary word worry is assigned to LIWC category anxiety. In our work, we have utilized all word categories of LIWC corresponding to affective processes: positive emotion, angry, sad, anxious, and negative emotion. Thus the descriptor et−1 has five features with each feature denoting 636 Corpus Name Conversations Words % Colored Words Content Fisher 11700 21167581 3.79 Conversations DAIC 688 677389 5.13 Conversations SEMAINE 959 23706 6.55 Conversations CMU-MOSI 93 26121 6.54 Monologues Table 1: Summary of corpora used in this paper. CMU-MOSI and SEMAINE are observed to have higher emotional content than Fisher and DAIC corpora. presence or absence of a specific emotion, which is obtained by binary thresholding of the features extracted from LIWC. For example, the affective representation of the sentence i will fight in the war is et−1 ={“sad”:0, “angry”:1, “anxiety”:0, “negative emotion”:1, “positive emotion”:0}. 3.4 Affect-LM for Emotional Text Generation Affect-LM can be used to generate sentences conditioned on the input affect category, the affect strength β, and the context words. For our experiments, we have chosen the following affect categories - positive emotion, anger, sad, anxiety, and negative emotion (which is a superclass of anger, sad and anxiety). 
As described in Section 3.2, the affect strength β defines the degree of dominance of the affect-dependent energy term on the word prediction in the language model, consequently after model training we can change β to control the degree of how “emotionally colored” a generated utterance is, varying from β = 0 (neutral; baseline model) to β = ∞(the generated sentences only consist of emotionally colored words, with no grammatical structure). When Affect-LM is used for generation, the affect categories could be either (1) inferred from the context using LIWC (this occurs when we provide sentence beginnings which are emotionally colored themselves), or (2) set to an input emotion descriptor e (this is obtained by setting e to a binary vector encoding the desired emotion and works even for neutral sentence beginnings). Given an initial starting set of M words w1, w2, ..., wM to complete, affect strength β, and the number of words N to generate each ith generated word is obtained by sampling from P(wi|w1, w2, ..., wi−1, e; β) for i ∈{M +1, M + 2, ..., M + N}. 4 Experimental Setup In Section 1, we have introduced three primary research questions related to the ability of the proposed Affect-LM model to generate emotionally colored conversational text without sacrificing grammatical correctness, and to obtain lower perplexity than a baseline LSTM language model when evaluated on emotionally colored corpora. In this section, we discuss our experimental setup to address these questions, with a description of Affect-LM’s architecture and the corpora used for training and evaluating the language models. 4.1 Speech Corpora The Fisher English Training Speech Corpus is the main corpus used for training the proposed model, in addition to which we have chosen three emotionally colored conversational corpora. A brief description of each corpus is given below, and in Table 1, we report relevant statistics, such as the total number of words, along with the fraction of emotionally colored words (those belonging to the LIWC affective word categories) in each corpus. Fisher English Training Speech Parts 1 & 2: The Fisher dataset (Cieri et al., 2004) consists of speech from telephonic conversations of 10 minutes each, along with their associated transcripts. Each conversation is between two strangers who are requested to speak on a randomly selected topic from a set. Examples of conversation topics are Minimum Wage, Time Travel and Comedy. Distress Assessment Interview Corpus (DAIC): The DAIC corpus introduced by Gratch (2014) consists of 70+ hours of dyadic interviews between a human subject and a virtual human, where the virtual human asks questions designed to diagnose symptoms of psychological distress in the subject such as depression or PTSD (Post Traumatic Stress Disorder). SEMAINE dataset: SEMAINE (McKeown et al., 2012) is a large audiovisual corpus consisting of interactions between subjects and an operator simulating a SAL (Sensitive Artificial Listener). There are a total of 959 conversations which are approximately 5 minutes each, and are transcribed and annotated with affective dimensions. Multimodal Opinion-level Sentiment Intensity Dataset (CMU-MOSI): (Zadeh et al., 2016) This is a multimodal annotated corpus of opinion 637 videos where in each video a speaker expresses his opinion on a commercial product. The corpus consist of speech from 93 videos from 89 distinct speakers (41 male and 48 female speakers). 
This corpus differs from the others since it contains monologues rather than conversations. While we find that all corpora contain spoken language, they have the following characteristics different from the Fisher corpus: (1) More emotional content as observed in Table 1, since they have been generated through a human subject’s spontaneous replies to questions designed to generate an emotional response, or from conversations on emotion-inducing topics (2) Domain mismatch due to recording environment (for example, the DAIC corpus was created in a mental health setting, while the CMU-MOSI corpus consisted of opinion videos uploaded online). (3) Significantly smaller than the Fisher corpus, which is 25 times the size of the other corpora combined. Thus, we perform training in two separate stages - training of the baseline and Affect-LM models on the Fisher corpus, and subsequent adaptation and fine-tuning on each of the emotionally colored corpora. 4.2 Affect-LM Neural Architecture For our experiments, we have implemented a baseline LSTM language model in Tensorflow (Abadi et al., 2016), which follows the non-regularized implementation as described in Zaremba et al. (2014) and to which we have added a separate energy term for the affect category in implementing Affect-LM. We have used a vocabulary of 10000 words and an LSTM network with 2 hidden layers and 200 neurons per hidden layer. The network is unrolled for 20 time steps, and the size of each minibatch is 20. The affect category et−1 is processed by a multi-layer perceptron with a single hidden layer of 100 neurons and sigmoid activation function to yield g(et−1). We have set the output layer size to 200 for both f(ct−1) and g(et−1). We have kept the network architecture constant throughout for ease of comparison between the baseline and Affect-LM. 4.3 Language Modeling Experiments Affect-LM can also be used as a language model where the next predicted word is estimated from the words in the context, along with an affect category extracted from the context words themselves (instead of being encoded externally as in generation). To evaluate whether additional emotional information could improve the prediction performance, we train the corpora detailed in Section 4.1 in two stages as described below: (1) Training and validation of the language models on Fisher dataset- The Fisher corpus is split in a 75:15:10 ratio corresponding to the training, validation and evaluation subsets respectively, and following the implementation in Zaremba et al. (2014), we train the language models (both the baseline and Affect-LM) on the training split for 13 epochs, with a learning rate of 1.0 for the first four epochs, and the rate decreasing by a factor of 2 after every subsequent epoch. The learning rate and neural architecture are the same for all models. We validate the model over the affect strength β ∈[1.0, 1.5, 1.75, 2.0, 2.25, 2.5, 3.0]. The best performing model on the Fisher validation set is chosen and used as a seed for subsequent adaptation on the emotionally colored corpora. (2) Fine-tuning the seed model on other corpora- Each of the three corpora - CMU-MOSI, DAIC and SEMAINE are split in a 75:15:10 ratio to create individual training, validation and evaluation subsets. For both the baseline and AffectLM, the best performing model from Stage 1 (the seed model) is fine-tuned on each of the training corpora, with a learning rate of 0.25 which is constant throughout, and a validation grid of β ∈[1.0, 1.5, 1.75, 2.0]. 
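Before turning to the evaluation, here is a hedged PyTorch sketch of the architecture described in Section 4.2 and the output layer of Equation 3. The paper's implementation is in TensorFlow, so none of the names or initialisation details below are the authors' code; only the stated hyperparameters (2-layer LSTM with 200 units, a single-hidden-layer sigmoid MLP with 100 units for the 5-dimensional affect descriptor, 200-dimensional outputs for both f and g) are taken from the text.

```python
# Hedged sketch of Affect-LM: logits_i = U_i^T f(c_{t-1}) + beta * V_i^T g(e_{t-1}) + b_i.
import torch
import torch.nn as nn

class AffectLMSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=200, hidden=200, affect_dim=5, beta=1.0):
        super().__init__()
        self.beta = beta
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        # g(e_{t-1}): one hidden layer of 100 sigmoid units, 200-dim output
        self.affect_mlp = nn.Sequential(nn.Linear(affect_dim, 100), nn.Sigmoid(),
                                        nn.Linear(100, hidden))
        self.U = nn.Linear(hidden, vocab_size, bias=True)    # U and the bias terms b
        self.V = nn.Linear(hidden, vocab_size, bias=False)   # affect-discriminative V

    def forward(self, context_ids, affect):
        f_c, _ = self.lstm(self.embed(context_ids))   # f(c_{t-1}) at every position
        g_e = self.affect_mlp(affect).unsqueeze(1)    # g(e_{t-1}), broadcast over time
        return self.U(f_c) + self.beta * self.V(g_e)  # logits; softmax applied in the loss

model = AffectLMSketch()
logits = model(torch.randint(0, 10000, (20, 20)), torch.zeros(20, 5))
```

Training this with a standard cross-entropy loss over the logits recovers the baseline LSTM language model when β = 0, matching the role of the affect strength parameter described above.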
For each model adapted on a corpus, we compare the perplexities obtained by Affect-LM and the baseline model when evaluated on that corpus. 4.4 Sentence Generation Perception Study We assess Affect-LM’s ability to generate emotionally colored text of varying degrees without severely deteriorating grammatical correctness, by conducting an extensive perception study on Amazon’s Mechanical Turk (MTurk) platform. The MTurk platform has been successfully used in the past for a wide range of perception experiments and has been shown to be an excellent resource to collect human ratings for large studies (Buhrmester et al., 2011). Specifically, we generated more than 200 sentences for four sentence beginnings (namely the three sentence beginnings listed in Table 2 as well as an end of sentence token indicating that the model should generate a new sentence) in five affect categories happy(positive emotion), angry, sad, anxiety, and negative emotion. The Affect-LM model trained 638 Beginning Affect Category Completed sentence I feel so Happy good because i think that it’s important to have a relationship with a friend Angry bad that i hate it and i hate that because they they kill themselves and then they fight Sad sad to miss because i i miss the feelings of family members who i lost feelings with Anxious horrible i mean i think when we’re going to you know war and alert alert and we’re actually gonna die Neutral bad if i didn’t know that the decision was going on I told him to Happy be honest and i said well i hope that i ’m going to be a better person Angry see why he was fighting with my son Sad leave the house because i hurt one and i lost his leg and hurt him Anxious be afraid of him and he he just he just didn’t care about the death penalty Neutral do this position i think he is he’s got a lot of money he has to pay himself a lot of money Why did you Happy have a best friend Angry say it was only a criminal being killed at a war or something Sad miss your feelings Anxious worry about fear factor Neutral believe in divorce Table 2: Example sentences generated by the model conditioned on different affect categories on the Fisher corpus was used for sentence generation. Each sentence was evaluated by two human raters that have a minimum approval rating of 98% and are located in the United States. The human raters were instructed that the sentences should be considered to be taken from a conversational rather than a written context: repetitions and pause fillers (e.g., um, uh) are common and no punctuation is provided. The human raters evaluated each sentence on a seven-point Likert scale for the five affect categories, overall affective valence as well as the sentence’s grammatical correctness and were paid 0.05USD per sentence. We measured inter-rater agreement using Krippendorffs α and observed considerable agreement between raters across all categories (e.g., for valence α = 0.510 and grammatical correctness α = 0.505). For each target emotion (i.e., intended emotion of generated sentences) we conducted an initial MANOVA, with human ratings of affect categories the DVs (dependent variables) and the affect strength parameter β the IV (independent variable). We then conducted follow-up univariate ANOVAs to identify which DV changes significantly with β. In total we conducted 5 MANOVAs and 30 follow-up ANOVAs, which required us to update the significance level to p<0.001 following a Bonferroni correction. 
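The statistical procedure just described can be made concrete with a small sketch: a Bonferroni-adjusted significance threshold for the 5 MANOVAs plus 30 follow-up ANOVAs, and a single univariate ANOVA of one rating DV across the affect-strength levels. The rating data below are random stand-ins, not the MTurk responses.

```python
# Sketch of the follow-up analysis: Bonferroni adjustment and a one-way ANOVA
# of a single rating DV across beta levels. Ratings are illustrative stand-ins.
import numpy as np
from scipy.stats import f_oneway

n_tests = 5 + 30                    # 5 MANOVAs + 30 follow-up ANOVAs
alpha_adj = 0.05 / n_tests          # ~0.0014, consistent with the p < 0.001 threshold

rng = np.random.default_rng(0)
beta_levels = [0, 1, 2, 3, 4, 5]
ratings_by_beta = [rng.integers(1, 8, size=40) for _ in beta_levels]  # 7-point Likert

F, p = f_oneway(*ratings_by_beta)
print(f"adjusted alpha = {alpha_adj:.4f}, F = {F:.2f}, p = {p:.4f}, "
      f"significant = {p < alpha_adj}")
```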
5 Results 5.1 Generation of Emotional Text In Section 3.4 we have described the process of sampling text from the model conditioned on input affective information (research question Q1). Table 2 shows three sentences generated by the model for input sentence beginnings I feel so ..., Why did you ... and I told him to ... for each of five affect categories - happy(positive emotion), angry, sad anxiety, and neutral(no emotion). They have been selected from a pool of 20 generated sentences for each category and sentence beginning. 5.2 MTurk Perception Experiments In the following we address research question Q2 by reporting the main statistical findings of our MTurk study, which are visualized in Figures 2 and 3. Positive Emotion Sentences. The multivariate result was significant for positive emotion generated sentences (Pillai’s Trace=.327, F(4,437)=6.44, p<.0001). Follow up ANOVAs revealed significant results for all DVs except angry with p<.0001, indicating that both affective valence and happy DVs were successfully manipulated with β, as seen in Figure 2(a). Grammatical correctness was also significantly influenced by the affect strength parameter β and results show that the correctness deteriorates with increasing β (see Figure 3). However, a post-hoc Tukey test revealed that only the highest β value shows a significant drop in grammatical correctness at p<.05. Negative Emotion Sentences. The multivariate result was significant for negative emotion generated sentences (Pillai’s Trace=.130, F(4,413)=2.30, p<.0005). Follow up ANOVAs revealed significant results for affective valence and happy DVs with p<.0005, indicating that the affective valence DV was successfully manipulated with β, as seen in Figure 2(b). Further, as intended there were no significant differences for DVs angry, sad and anxious, indicating that the negative emotion DV refers to a more general affect related concept rather than a specific negative emotion. This finding is in concordance with the intended LIWC category of negative affect that forms a parent category above the more 639 1 2 3 4 5 6 7 (a) Positive Emotion (b) Negative Emotion (c) Angry (d) Sad (e) Anxious 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 Anxious Happy Angry Emotional Strength (beta) Emotion Ratings Sad Affect Valence Figure 2: Amazon Mechanical Turk study results for generated sentences in the target affect categories positive emotion, negative emotion, angry, sad, and anxious (a)-(e). The most relevant human rating curve for each generated emotion is highlighted in red, while less relevant rating curves are visualized in black. Affect categories are coded via different line types and listed in legend below figure. specific emotions, such as angry, sad, and anxious (Pennebaker et al., 2001). Grammatical correctness was also significantly influenced by the affect strength β and results show that the correctness deteriorates with increasing β (see Figure 3). As for positive emotion, a post-hoc Tukey test revealed that only the highest β value shows a significant drop in grammatical correctness at p<.05. Angry Sentences. The multivariate result was significant for angry generated sentences (Pillai’s Trace=.199, F(4,433)=3.76, p<.0001). Follow up ANOVAs revealed significant results for affective valence, happy, and angry DVs with p<.0001, indicating that both affective valence and angry DVs were successfully manipulated with β, as seen in Figure 2(c). 
Grammatical correctness was not significantly influenced by the affect strength parameter β, which indicates that angry sentences are highly stable across a wide range of β (see Figure 3). However, it seems that human raters could not successfully distinguish between angry, sad, and anxious affect categories, indicating that the generated sentences likely follow a general negative affect dimension. Sad Sentences. The multivariate result was significant for sad generated sentences (Pillai’s Trace=.377, F(4,425)=7.33, p<.0001). Follow up ANOVAs revealed significant results only for the sad DV with p<.0001, indicating that while the sad DV can be successfully manipulated with β, as seen in Figure 2(d). The grammatical correctness deteriorates significantly with β. Specifically, a post-hoc Tukey test revealed that only the two highest β values show a significant drop in grammatical correctness at p<.05 (see Figure 3). 0 1 2 3 4 5 Emotional Strength (beta) 1 2 3 4 5 6 7 Grammatical Correctness Ratings Grammatical Evaluation Happy Angry Sad Anxious Negative Affect Figure 3: Mechanical Turk study results for grammatical correctness for all generated target emotions. Perceived grammatical correctness for each affect categories are color-coded. A post-hoc Tukey test for sad reveals that β = 3 is optimal for this DV, since it leads to a significant jump in the perceived sadness scores at p<.005 for β ∈{0, 1, 2}. Anxious Sentences. The multivariate result was significant for anxious generated sentences (Pillai’s Trace=.289, F(4,421)=6.44, p<.0001). Follow up ANOVAs revealed significant results for affective valence, happy and anxious DVs with p<.0001, indicating that both affective valence and anxiety DVs were successfully manipulated with β, as seen in Figure 2(e). Grammatical correctness was also significantly influenced by the affect strength parameter β and results show that the correctness deteriorates with increasing β. Similarly for sad, a post-hoc Tukey test revealed that only the two highest β values show a significant drop in grammatical correctness at p<.05 (see Figure 3). Again, a post-hoc Tukey test for anxious reveals that β = 3 is optimal for this DV, since it leads to a significant jump in the perceived 640 Training (Fisher) Adaptation Perplexity Baseline Affect-LM Baseline Affect-LM Fisher 37.97 37.89 DAIC 65.02 64.95 55.86 55.55 SEMAINE 88.18 86.12 57.58 57.26 CMU-MOSI 104.74 101.19 66.72 64.99 Average 73.98 72.54 60.05 59.26 Table 3: Evaluation perplexity scores obtained by the baseline and Affect-LM models when trained on Fisher and subsequently adapted on DAIC, SEMAINE and CMU-MOSI corpora anxiety scores at p<.005 for β ∈{0, 1, 2}. 5.3 Language Modeling Results In Table 3, we address research question Q3 by presenting the perplexity scores obtained by the baseline model and Affect-LM, when trained on the Fisher corpus and subsequently adapted on three emotional corpora (each adapted model is individually trained on CMU-MOSI, DAIC and SEMAINE). The models trained on Fisher are evaluated on all corpora while each adapted model is evaluated only on it’s respective corpus. For all corpora, we find that Affect-LM achieves lower perplexity on average than the baseline model, implying that affect category information obtained from the context words improves language model prediction. The average perplexity improvement is 1.44 (relative improvement 1.94%) for the model trained on Fisher, while it is 0.79 (1.31%) for the adapted models. 
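The relative improvements quoted above follow directly from the average perplexities; a short worked computation (using the rounded averages from Table 3, so the last digit can differ slightly from the values reported in the text) is given below.

```python
# Worked arithmetic for the average perplexity improvements discussed above.
baseline_fisher, affectlm_fisher = 73.98, 72.54   # averages, Fisher-trained models
baseline_adapt, affectlm_adapt = 60.05, 59.26     # averages, adapted models

for name, b, a in [("Fisher-trained", baseline_fisher, affectlm_fisher),
                   ("adapted", baseline_adapt, affectlm_adapt)]:
    absolute = b - a
    relative = 100.0 * absolute / b
    print(f"{name}: absolute = {absolute:.2f}, relative = {relative:.2f}%")
# Yields 1.44 / ~1.95% and 0.79 / ~1.32% from the rounded table values,
# matching the ~1.94% and ~1.31% reported in the text up to rounding.
```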
We note that larger improvements in perplexity are observed for corpora with higher content of emotional words. This is supported by the results in Table 3, where AffectLM obtains a larger reduction in perplexity for the CMU-MOSI and SEMAINE corpora, which respectively consist of 2.76% and 2.75% more emotional words than the Fisher corpus. 5.4 Word Representations In Equation 3, Affect-LM learns a weight matrix V which captures the correlation between the predicted word wt, and the affect category et−1. Thus, each row of the matrix Vi is an emotionally meaningful embedding of the i-th word in the vocabulary. In Figure 4, we present a t-SNE visualization of these embeddings, where each data point is a separate word, and words which appear in the LIWC dictionary are colored based on which affect category they belong to (we have labeled only words in categories positive emotion, negative emotion, anger, sad and anxiety since Figure 4: Embeddings learnt by Affect-LM these categories contain the most frequent words). Words colored grey are those not in the LIWC dictionary. In Figure 4, we observe that the embeddings contain affective information, where the positive emotion is highly separated from the negative emotions (sad, angry, anxiety) which are clustered together. 6 Conclusions and Future Work In this paper, we have introduced a novel language model Affect-LM for generating affective conversational text conditioned on context words, an affective category and an affective strength parameter. MTurk perception studies show that the model can generate expressive text at varying degrees of emotional strength without affecting grammatical correctness. We also evaluate Affect-LM as a language model and show that it achieves lower perplexity than a baseline LSTM model when the affect category is obtained from the words in the context. For future work, we wish to extend this model by investigating language generation conditioned on other modalities such as facial images and speech, and to applications such as dialogue generation for virtual agents. Acknowledgments This material is based upon work supported by the U.S. Army Research Laboratory under contract number W911NF-14-D-0005. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Government, and no official endorsement should be inferred. Sayan Ghosh also acknowledges the Viterbi Graduate School Fellowship for funding his graduate studies. 641 References Sungjin Ahn, Heeyoul Choi, Tanel P¨arnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318 . Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research 3(Feb):1137–1155. Michael Buhrmester, Tracy Kwang, and Samuel D Gosling. 2011. Amazon’s mechanical turk a new source of inexpensive, yet high-quality, data? Perspectives on psychological science 6(1):3–5. Ivan Bulyko, Mari Ostendorf, Manhung Siu, Tim Ng, Andreas Stolcke, and ¨Ozg¨ur C¸ etin. 2007. Web resources for language modeling in conversational speech recognition. ACM Transactions on Speech and Language Processing (TSLP) 5(1):1. Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. 2015. Describing multimedia content using attention-based encoder-decoder networks. IEEE Transactions on Multimedia 17(11):1875–1886. Christopher Cieri, David Miller, and Kevin Walker. 2004. 
The fisher corpus: a resource for the next generations of speech-to-text. In LREC. volume 4, pages 69–71. Mart´ın Abadi et al. 2016. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Savannah, Georgia, USA. Jonathan et al. Gratch. 2014. The distress analysis interview corpus of human and computer interviews. In LREC. Citeseer, pages 3123–3128. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410 . Justine Kao and Dan Jurafsky. 2012. A computational analysis of style, affect, and imagery in contemporary poetry. Fazel Keshtkar and Diana Inkpen. 2011. A patternbased model for generating text to express emotion. In Affective Computing and Intelligent Interaction, Springer, pages 11–21. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Multimodal neural language models. Saad Mahamood and Ehud Reiter. 2011. Generating affective natural language for parents of neonatal infants. In Proceedings of the 13th European Workshop on Natural Language Generation. Association for Computational Linguistics, pages 12–21. Franc¸ois Mairesse and Marilyn Walker. 2007. Personage: Personality generation for dialogue. Gary McKeown, Michel Valstar, Roddy Cowie, Maja Pantic, and Marc Schroder. 2012. The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Transactions on Affective Computing 3(1):5–17. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. volume 2, page 3. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. Semeval2016 task 4: Sentiment analysis in twitter. Proceedings of SemEval pages 1–18. James W Pennebaker. 2011. The secret life of pronouns. New Scientist 211(2828):42–45. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates 71(2001):2001. Rosalind Picard. 1997. Affective computing, volume 252. MIT press Cambridge. Klaus R Scherer, Tanja B¨anziger, and Etienne Roesch. 2010. A Blueprint for Affective Computing: A sourcebook and manual. Oxford University Press. Andreas Stolcke et al. 2002. Srilm-an extensible language modeling toolkit. In Interspeech. volume 2002, page 2002. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Interspeech. pages 194–197. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31(6):82–88. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 . 642
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 56–68 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1006 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 56–68 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1006 Morph-fitting: Fine-Tuning Word Vector Spaces with Simple Language-Specific Rules Ivan Vuli´c1 , Nikola Mrkši´c1, Roi Reichart2 Diarmuid Ó Séaghdha3, Steve Young1 , Anna Korhonen1 1 University of Cambridge 2 Technion, Israel Institute of Technology 3 Apple Inc. {iv250,nm480,sjy11,alk23}@cam.ac.uk [email protected] [email protected] Abstract Morphologically rich languages accentuate two properties of distributional vector space models: 1) the difficulty of inducing accurate representations for lowfrequency word forms; and 2) insensitivity to distinct lexical relations that have similar distributional signatures. These effects are detrimental for language understanding systems, which may infer that inexpensive is a rephrasing for expensive or may not associate acquire with acquires. In this work, we propose a novel morph-fitting procedure which moves past the use of curated semantic lexicons for improving distributional vector spaces. Instead, our method injects morphological constraints generated using simple language-specific rules, pulling inflectional forms of the same word close together and pushing derivational antonyms far apart. In intrinsic evaluation over four languages, we show that our approach: 1) improves low-frequency word estimates; and 2) boosts the semantic quality of the entire word vector collection. Finally, we show that morph-fitted vectors yield large gains in the downstream task of dialogue state tracking, highlighting the importance of morphology for tackling long-tail phenomena in language understanding tasks. 1 Introduction Word representation learning has become a research area of central importance in natural language processing (NLP), with its usefulness demonstrated across many application areas such as parsing (Chen and Manning, 2014; Johannsen et al., 2015), machine translation (Zou et al., 2013), and many others (Turian et al., 2010; Collobert et al., 2011). Most prominent word representation techniques are grounded in the distributional hypothesis (Harris, 1954), relying on word co-occurrence information in large textual corpora (Curran, 2004; Turney and Pantel, 2010; Mikolov et al., 2013; Mnih and Kavukcuoglu, 2013; Levy and Goldberg, 2014; Schwartz et al., 2015, i.a.). Morphologically rich languages, in which “substantial grammatical information. . . is expressed at word level” (Tsarfaty et al., 2010), pose specific challenges for NLP. This is not always considered when techniques are evaluated on languages such as English or Chinese, which do not have rich morphology. In the case of distributional vector space models, morphological complexity brings two challenges to the fore: 1. Estimating Rare Words: A single lemma can have many different surface realisations. Naively treating each realisation as a separate word leads to sparsity problems and a failure to exploit their shared semantics. On the other hand, lemmatising the entire corpus can obfuscate the differences that exist between different word forms even though they share some aspects of meaning. 2. 
Embedded Semantics: Morphology can encode semantic relations such as antonymy (e.g. literate and illiterate, expensive and inexpensive) or (near-)synonymy (north, northern, northerly). In this work, we tackle the two challenges jointly by introducing a resource-light vector space finetuning procedure termed morph-fitting. The proposed method does not require curated knowledge bases or gold lexicons. Instead, it makes use of the observation that morphology implicitly encodes semantic signals pertaining to synonymy (e.g., German word inflections katalanisch, katalanischem, katalanischer denote the same semantic concept in different grammatical roles), and antonymy (e.g., mature vs. immature), capitalising on the 56 en_expensive de_teure it_costoso en_slow de_langsam it_lento en_book de_buch it_libro costly teuren dispendioso fast allmählich lentissimo books sachbuch romanzo costlier kostspielige remunerativo slowness rasch lenta memoir buches racconto cheaper aufwändige redditizio slower gemächlich inesorabile novel romandebüt volumetto prohibitively kostenintensive rischioso slowed schnell rapidissimo storybooks büchlein saggio pricey aufwendige costosa slowing explosionsartig graduale blurb pamphlet ecclesiaste expensiveness teures costosa slowing langsamer lenti booked bücher libri costly teuren costose slowed langsames lente rebook büch libra costlier teurem costosi slowness langsame lenta booking büche librare ruinously teurer dispendioso slows langsamem veloce rebooked büches libre unaffordable teurerer dispendiose idle langsamen rapido books büchen librano Table 1: The nearest neighbours of three example words (expensive, slow and book) in English, German and Italian before (top) and after (bottom) morph-fitting. proliferation of word forms in morphologically rich languages. Formalised as an instance of the post-processing semantic specialisation paradigm (Faruqui et al., 2015; Mrkši´c et al., 2016), morphfitting is steered by a set of linguistic constraints derived from simple language-specific rules which describe (a subset of) morphological processes in a language. The constraints emphasise similarity on one side (e.g., by extracting morphological synonyms), and antonymy on the other (by extracting morphological antonyms), see Fig. 1 and Tab. 2. The key idea of the fine-tuning process is to pull synonymous examples described by the constraints closer together in the transformed vector space, while at the same time pushing antonymous examples away from each other. 
The explicit post-hoc injection of morphological constraints enables: a) the estimation of more accurate vectors for lowfrequency words which are linked to their highfrequency forms by the constructed constraints;1 this tackles the data sparsity problem; and b) specialising the distributional space to distinguish between similarity and relatedness (Kiela et al., 2015), thus supporting language understanding applications such as dialogue state tracking (DST).2 As a post-processor, morph-fitting allows the integration of morphological rules with any distributional vector space in any language: it treats an input distributional word vector space as a black box and fine-tunes it so that the transformed space reflects the knowledge coded in the input morphological constraints (e.g., Italian words rispettoso and irrispetosa should be far apart in the trans1For instance, the vector for the word katalanischem which occurs only 9 times in the German Wikipedia will be pulled closer to the more reliable vectors for katalanisch and katalanischer, with frequencies of 2097 and 1383 respectively. 2Representation models that do not distinguish between synonyms and antonyms may have grave implications in downstream language understanding applications such as spoken dialogue systems: a user looking for ‘an affordable Chinese restaurant in west Cambridge’ does not want a recommendation for ‘an expensive Thai place in east Oxford’. rispettoso rispettosa rispettosi irrispettoso irrispettosa irrispettosi Figure 1: Morph-fitting in Italian. Representations for rispettoso, rispettosa, rispettosi (EN: respectful), are pulled closer together in the vector space (solid lines; ATTRACT constraints). At the same time, the model pushes them away from their antonyms (dashed lines; REPEL constraints) irrispettoso, irrispettosa, irrispettosi (EN: disrespectful), obtained through morphological affix transformation captured by language-specific rules (e.g., adding the prefix ir- typically negates the base word in Italian) formed vector space, see Fig. 1). Tab. 1 illustrates the effects of morph-fitting by qualitative examples in three languages: the vast majority of nearest neighbours are “morphological” synonyms. We demonstrate the efficacy of morph-fitting in four languages (English, German, Italian, Russian), yielding large and consistent improvements on benchmarking word similarity evaluation sets such as SimLex-999 (Hill et al., 2015), its multilingual extension (Leviant and Reichart, 2015), and SimVerb-3500 (Gerz et al., 2016). The improvements are reported for all four languages, and with a variety of input distributional spaces, verifying the robustness of the approach. We then show that incorporating morph-fitted vectors into a state-of-the-art neural-network DST model results in improved tracking performance, especially for morphologically rich languages. We report an improvement of 4% on Italian, and 6% on German when using morph-fitted vectors instead of the distributional ones, setting a new state-of-theart DST performance for the two datasets.3 3There are no readily available DST datasets for Russian. 57 2 Morph-fitting: Methodology Preliminaries In this work, we focus on four languages with varying levels of morphological complexity: English (EN), German (DE), Italian (IT), and Russian (RU). These correspond to languages in the Multilingual SimLex-999 dataset. Vocabularies Wen, Wde, Wit, Wru are compiled by retaining all word forms from the four Wikipedias with word frequency over 10, see Tab. 3. 
We then extract sets of linguistic constraints from these (large) vocabularies using a set of simple language-specific if-then-else rules, see Tab. 2.4 These constraints (Sect. 2.2) are used as input for the vector space post-processing ATTRACT-REPEL algorithm (outlined in Sect. 2.1). 2.1 The ATTRACT-REPEL Model The ATTRACT-REPEL model, proposed by Mrkši´c et al. (2017b), is an extension of the PARAGRAM procedure proposed by Wieting et al. (2015). It provides a generic framework for incorporating similarity (e.g. successful and accomplished) and antonymy constraints (e.g. nimble and clumsy) into pre-trained word vectors. Given the initial vector space and collections of ATTRACT and REPEL constraints A and R, the model gradually modifies the space to bring the designated word vectors closer together or further apart. The method’s cost function consists of three terms. The first term pulls the ATTRACT examples (xl, xr) ∈A closer together. If BA denotes the current mini-batch of ATTRACT examples, this term can be expressed as: A(BA) = X (xl,xr)∈BA (ReLU (δatt + xltl −xlxr) + ReLU (δatt + xrtr −xlxr)) where δatt is the similarity margin which determines how much closer synonymous vectors should be to each other than to each of their respective negative examples. ReLU(x) = max(0, x) is the standard rectified linear unit (Nair and Hinton, 2010). The ‘negative’ example ti for each word xi in any ATTRACT pair is the word vector closest to xi among the examples in the current minibatch (distinct from its target synonym and xi itself). This means that this term forces synonymous 4A native speaker can easily come up with these sets of morphological rules (or at least with a reasonable subset of them) without any linguistic training. What is more, the rules for DE, IT, and RU were created by non-native, non-fluent speakers with a limited knowledge of the three languages, exemplifying the simplicity and portability of the approach. English German Italian (discuss, discussed) (schottisch, schottischem) (golfo, golfi) (laugh, laughing) (damalige, damaligen) (minato, minata) (pacifist, pacifists) (kombiniere, kombinierte) (mettere, metto) (evacuate, evacuated) (schweigt, schweigst) (crescono, cresci) (evaluate, evaluates) (hacken, gehackt) (crediti, credite) (dressed, undressed) (stabil, unstabil) (abitata, inabitato) (similar, dissimilar) (geformtes, ungeformt) (realtà, irrealtà) (formality, informality) (relevant, irrelevant) (attuato, inattuato) Table 2: Example synonymous (inflectional; top) and antonymous (derivational; bottom) constraints. words from the in-batch ATTRACT constraints to be closer to one another than to any other word in the current mini-batch. The second term pushes antonyms away from each other. If (xl, xr) ∈BR is the current minibatch of REPEL constraints, this term can be expressed as follows: R(BR) = X (xl,xr)∈BR (ReLU (δrpl + xlxr −xltr) + ReLU (δrpl + xlxr −xrtr)) In this case, each word’s ‘negative’ example is the (in-batch) word vector furthest away from it (and distinct from the word’s target antonym). The intuition is that we want antonymous words from the input REPEL constraints to be further away from each other than from any other word in the current mini-batch; δrpl is now the repel margin. The final term of the cost function serves to retain the abundance of semantic information encoded in the starting distributional space. 
If xinit i is the initial distributional vector and V (B) is the set of all vectors present in the given mini-batch, this term (per mini-batch) is expressed as follows: R(BA, BR) = X xi∈V (BA∪BR) λreg xinit i −xi 2 where λreg is the L2 regularisation constant.5 This term effectively pulls word vectors towards their initial (distributional) values, ensuring that relations encoded in initial vectors persist as long as they do not contradict the newly injected ones. 2.2 Language-Specific Rules and Constraints Semantic Specialisation with Constraints The fine-tuning ATTRACT-REPEL procedure is entirely driven by the input ATTRACT and REPEL sets of 5We use hyperparameter values δatt = 0.6, δrpl = 0.0, λreg = 10−9 from prior work without fine-tuning. We train all models for 10 epochs with AdaGrad (Duchi et al., 2011). 58 |W| |A| |R| English 1,368,891 231,448 45,964 German 1,216,161 648,344 54,644 Italian 541,779 278,974 21,400 Russian 950,783 408,400 32,174 Table 3: Vocabulary sizes and counts of ATTRACT (A) and REPEL (R) constraints. constraints. These can be extracted from a variety of semantic databases such as WordNet (Fellbaum, 1998), the Paraphrase Database (Ganitkevitch et al., 2013; Pavlick et al., 2015), or BabelNet (Navigli and Ponzetto, 2012; Ehrmann et al., 2014) as done in prior work (Faruqui et al., 2015; Wieting et al., 2015; Mrkši´c et al., 2016, i.a.). In this work, we investigate another option: extracting constraints without curated knowledge bases in a spectrum of languages by exploiting inherent language-specific properties related to linguistic morphology. This relaxation ensures a wider portability of ATTRACTREPEL to languages and domains without readily available or adequate resources. Extracting ATTRACT Pairs The core difference between inflectional and derivational morphology can be summarised in a few lines as follows: the former refers to a set of processes through which the word form expresses meaningful syntactic information, e.g., verb tense, without any change to the semantics of the word. On the other hand, the latter refers to the formation of new words with semantic shifts in meaning (Schone and Jurafsky, 2001; Haspelmath and Sims, 2013; Lazaridou et al., 2013; Zeller et al., 2013; Cotterell and Schütze, 2017). For the ATTRACT constraints, we focus on inflectional rather than on derivational morphology rules as the former preserve the full meaning of a word, modifying it only to reflect grammatical roles such as verb tense or case markers (e.g., (en_read, en_reads) or (de_katalanisch, de_katalanischer)). This choice is guided by our intent to fine-tune the original vector space in order to improve the embedded semantic relations. We define two rules for English, widely recognised as morphologically simple (Avramidis and Koehn, 2008; Cotterell et al., 2016b). These are: (R1) if w1, w2 ∈Wen, where w2 = w1 + ing/ed/s, then add (w1, w2) and (w2, w1) to the set of ATTRACT constraints A. This rule yields pairs such as (look, looks), (look, looking), (look, looked). If w[: −1] is a function which strips the last character from word w, the second rule is: (R2) if w1 ends with the letter e and w1 ∈Wen and w2 ∈Wen, where w2 = w1[: −1] + ing/ed, then add (w1, w2) and (w2, w1) to A. This creates pairs such as (create, creating) and (create, created). 
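The two English rules can be turned into code almost verbatim. The sketch below is illustrative only: the toy vocabulary stands in for the Wikipedia-derived vocabulary Wen, and the function name is ours, not the authors'. It extracts the symmetric ATTRACT pairs produced by R1 and R2.

```python
# Sketch of rules R1 and R2 for extracting English ATTRACT pairs (Section 2.2).
def extract_attract_pairs(vocab):
    vocab = set(vocab)
    attract = set()
    for w1 in vocab:
        # R1: w2 = w1 + "ing" / "ed" / "s"
        for suffix in ("ing", "ed", "s"):
            w2 = w1 + suffix
            if w2 in vocab:
                attract.add((w1, w2))
                attract.add((w2, w1))
        # R2: for w1 ending in -e, w2 = w1[:-1] + "ing" / "ed"
        if w1.endswith("e"):
            for suffix in ("ing", "ed"):
                w2 = w1[:-1] + suffix
                if w2 in vocab:
                    attract.add((w1, w2))
                    attract.add((w2, w1))
    return attract

toy_vocab = ["look", "looks", "looking", "looked", "create", "creating", "created"]
print(sorted(extract_attract_pairs(toy_vocab)))
# e.g. ('create', 'created'), ('create', 'creating'), ('look', 'looked'), ...
```

Both orderings of each pair are added, mirroring the rule statements above, since the constraints are fed to ATTRACT-REPEL as directed pairs.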
Naturally, introducing more sophisticated rules is possible in order to cover for other special cases and morphological irregularities (e.g., sweep / swept), but in all our EN experiments, A is based on the two simple EN rules R1 and R2. The other three languages, with more complicated morphology, yield a larger number of rules. In Italian, we rely on the sets of rules spanning: (1) regular formation of plural (libro / libri); (2) regular verb conjugation (aspettare / aspettiamo); (3) regular formation of past participle (aspettare / aspettato); and (4) rules regarding grammatical gender (bianco / bianca). Besides these, another set of rules is used for German and Russian: (5) regular declension (e.g., asiatisch / asiatischem). Extracting REPEL Pairs As another source of implicit semantic signals, W also contains words which represent derivational antonyms: e.g., two words that denote concepts with opposite meanings, generated through a derivational process. We use a standard set of EN “antonymy” prefixes: APen = {dis, il, un, in, im, ir, mis, non, anti} (Fromkin et al., 2013). If w1, w2 ∈Wen, where w2 is generated by adding a prefix from APen to w1, then (w1, w2) and (w2, w1) are added to the set of REPEL constraints R. This rule generates pairs such as (advantage, disadvantage) and (regular, irregular). An additional rule replaces the suffix -ful with -less, extracting antonyms such as (careful, careless). Following the same principle, we use APde = {un, nicht, anti, ir, in, miss}, APit = {in, ir, im, anti}, and APru = {не, анти}. For instance, this generates an IT pair (rispettoso, irrispettoso) (see Fig. 1). For DE, we use another rule targeting suffix replacement: -voll is replaced by -los. We further expand the set of REPEL constraints by transitively combining antonymy pairs from the previous step with inflectional ATTRACT pairs. This step yields additional constraints such as (rispettosa, irrispettosi) (see Fig. 1). The final A and R constraint counts are given in Tab. 3. The full sets of rules are available as supplemental material. 3 Experimental Setup Training Data and Setup For each of the four languages we train the skip-gram with negative sampling (SGNS) model (Mikolov et al., 2013) 59 on the latest Wikipedia dump of each language. We induce 300-dimensional word vectors, with the frequency cut-off set to 10. The vocabulary sizes |W| for each language are provided in Tab. 3.6 We label these collections of vectors SGNS-LARGE. Other Starting Distributional Vectors We also analyse the impact of morph-fitting on other collections of well-known EN word vectors. These vectors have varying vocabulary coverage and are trained with different architectures. We test standard distributional models: Common-Crawl GloVe (Pennington et al., 2014), SGNS vectors (Mikolov et al., 2013) with various contexts (BOW = bag-ofwords; DEPS = dependency contexts), and training data (PW = Polyglot Wikipedia from Al-Rfou et al. (2013); 8B = 8 billion token word2vec corpus), following (Levy and Goldberg, 2014) and (Schwartz et al., 2015). We also test the symmetricpattern based vectors of Schwartz et al. (2016) (SymPat-Emb), count-based PMI-weighted vectors reduced by SVD (Baroni et al., 2014) (Count-SVD), a model which replaces the context modelling function from CBOW with bidirectional LSTMs (Melamud et al., 2016) (Context2Vec), and two sets of EN vectors trained by injecting multilingual information: BiSkip (Luong et al., 2015) and MultiCCA (Faruqui and Dyer, 2014). 
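Circling back to the constraint extraction of Sect. 2.2, the prefix-based REPEL rules and their transitive combination with the inflectional ATTRACT pairs can be sketched in the same style. The prefix list below is the English one quoted above; the helper names and data structures are our own illustration.

```python
EN_ANTONYMY_PREFIXES = ("dis", "il", "un", "in", "im", "ir", "mis", "non", "anti")

def extract_en_repel_pairs(vocab):
    """Derivational antonyms via prefixes, plus the -ful -> -less suffix rule."""
    repel = set()
    for w1 in vocab:
        for prefix in EN_ANTONYMY_PREFIXES:
            w2 = prefix + w1
            if w2 in vocab:
                repel.update({(w1, w2), (w2, w1)})
        if w1.endswith("ful"):
            w2 = w1[: -len("ful")] + "less"
            if w2 in vocab:
                repel.update({(w1, w2), (w2, w1)})
    return repel

def expand_repel_transitively(repel, attract):
    """Combine antonymy pairs with inflectional ATTRACT pairs, e.g.
    (rispettoso, irrispettoso) plus inflections -> (rispettosa, irrispettosi)."""
    syn = {}
    for a, b in attract:
        syn.setdefault(a, set()).add(b)
    expanded = set(repel)
    for w1, w2 in repel:
        for v1 in syn.get(w1, set()) | {w1}:
            for v2 in syn.get(w2, set()) | {w2}:
                expanded.update({(v1, v2), (v2, v1)})
    return expanded
```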
We also experiment with standard well-known distributional spaces in other languages (IT and DE), available from prior work (Dinu et al., 2015; Luong et al., 2015; Vuli´c and Korhonen, 2016a). Morph-fixed Vectors A baseline which utilises an equal amount of knowledge as morph-fitting, termed morph-fixing, fixes the vector of each word to the distributional vector of its most frequent inflectional synonym, tying the vectors of lowfrequency words to their more frequent inflections. For each word w1, we construct a set of M + 1 words Ww1 = {w1, w′ 1, . . . , w′ M} consisting of the word w1 itself and all M words which cooccur with w1 in the ATTRACT constraints. We then choose the word w′ max from the set Ww1 with the maximum frequency in the training data, and fix all other word vectors in Ww1 to its word vector. The morph-fixed vectors (MFIX) serve as our primary baseline, as they outperformed another straightforward baseline based on stemming across 6Other SGNS parameters were set to standard values (Baroni et al., 2014; Vuli´c and Korhonen, 2016b): 15 epochs, 15 negative samples, global learning rate: .025, subsampling rate: 1e −4. Similar trends in results persist with d = 100, 500. all of our intrinsic and extrinsic experiments. Morph-fitting Variants We analyse two variants of morph-fitting: (1) using ATTRACT constraints only (MFIT-A), and (2) using both ATTRACT and REPEL constraints (MFIT-AR). 4 Intrinsic Evaluation: Word Similarity Evaluation Setup and Datasets The first set of experiments intrinsically evaluates morph-fitted vector spaces on word similarity benchmarks, using Spearman’s rank correlation as the evaluation metric. First, we use the SimLex-999 dataset, as well as SimVerb-3500, a recent EN verb pair similarity dataset providing similarity ratings for 3,500 verb pairs.7 SimLex-999 was translated to DE, IT, and RU by Leviant and Reichart (2015), and they crowdsourced similarity scores from native speakers. We use this dataset for our multilingual evaluation.8 Morph-fitting EN Word Vectors As the first experiment, we morph-fit a wide spectrum of EN distributional vectors induced by various architectures (see Sect. 3). The results on SimLex and SimVerb are summarised in Tab. 4. The results with EN SGNS-LARGE vectors are shown in Fig. 3a. Morphfitted vectors bring consistent improvement across all experiments, regardless of the quality of the initial distributional space. This finding confirms that the method is robust: its effectiveness does not depend on the architecture used to construct the initial space. To illustrate the improvements, note that the best score on SimVerb for a model trained on running text is achieved by Context2vec (ρ = 0.388); injecting morphological constraints into this vector space results in a gain of 7.1 ρ points. Experiments on Other Languages We next extend our experiments to other languages, testing both morph-fitting variants. The results are summarised in Tab. 5, while Fig. 3a-3d show results for the morph-fitted SGNS-LARGE vectors. These scores confirm the effectiveness and robustness of morph-fitting across languages, suggesting that the idea of fitting to morphological constraints is indeed language-agnostic, given the set of languagespecific rule-based constraints. Fig. 3 also demon7Unlike other gold standard resources such as WordSim353 (Finkelstein et al., 2002) or MEN (Bruni et al., 2014), SimLex and SimVerb provided explicit guidelines to discern between semantic similarity and association, so that related but non-similar words (e.g. 
cup and coffee) have a low rating. (Since Leviant and Reichart (2015) re-scored the original EN SimLex, we use their EN SimLex version for consistency.)

...strates that the morph-fitted vector spaces consistently outperform the morph-fixed ones. The comparison between MFIT-A and MFIT-AR indicates that both sets of constraints are important for the fine-tuning process. MFIT-A yields consistent gains over the initial spaces, and (consistent) further improvements are achieved by also incorporating the antonymous REPEL constraints. This demonstrates that both types of constraints are useful for semantic specialisation.

Table 4: The impact of morph-fitting (MFIT-AR used) on a representative set of EN vector space models. All results show the Spearman's ρ correlation before and after morph-fitting. The numbers in parentheses refer to the vector dimensionality.

Vectors                                           SimLex-999     SimVerb-3500
1.  SG-BOW2-PW (300) (Mikolov et al., 2013)       .339 → .439    .277 → .381
2.  GloVe-6B (300) (Pennington et al., 2014)      .324 → .438    .286 → .405
3.  Count-SVD (500) (Baroni et al., 2014)         .267 → .360    .199 → .301
4.  SG-DEPS-PW (300) (Levy and Goldberg, 2014)    .376 → .434    .313 → .418
5.  SG-DEPS-8B (500) (Bansal et al., 2014)        .373 → .441    .356 → .473
6.  MultiCCA-EN (512) (Faruqui and Dyer, 2014)    .314 → .391    .296 → .354
7.  BiSkip-EN (256) (Luong et al., 2015)          .276 → .356    .260 → .333
8.  SG-BOW2-8B (500) (Schwartz et al., 2015)      .373 → .440    .348 → .441
9.  SymPat-Emb (500) (Schwartz et al., 2016)      .381 → .442    .284 → .373
10. Context2Vec (600) (Melamud et al., 2016)      .371 → .440    .388 → .459

Table 5: Results on multilingual SimLex-999 (EN, DE, and IT) with two morph-fitting variants.

Vectors                                              Distrib.   MFIT-A   MFIT-AR
EN: GloVe-6B (300)                                   .324       .376     .438
EN: SG-BOW2-PW (300)                                 .339       .385     .439
DE: SG-DEPS-PW (300) (Vulić and Korhonen, 2016a)     .267       .318     .325
DE: BiSkip-DE (256) (Luong et al., 2015)             .354       .414     .421
IT: SG-DEPS-PW (300) (Vulić and Korhonen, 2016a)     .237       .351     .391
IT: CBOW5-Wacky (300) (Dinu et al., 2015)            .363       .417     .446

Comparison to Other Specialisation Methods We also tried using other post-processing specialisation models from the literature in lieu of ATTRACT-REPEL, using the same set of "morphological" synonymy and antonymy constraints. We compare ATTRACT-REPEL to the retrofitting model of Faruqui et al. (2015) and counter-fitting (Mrkšić et al., 2016). The two baselines were trained for 20 iterations using suggested settings. The results for EN, DE, and IT are summarised in Fig. 2. They clearly indicate that MFIT-AR outperforms the two other post-processors for each language. We hypothesise that the difference in performance mainly stems from the context-sensitive vector space updates performed by ATTRACT-REPEL. Conversely, the other two models perform pairwise updates which do not consider what effect each update has on the example pair's relation to other word vectors (for a detailed comparison, see Mrkšić et al. (2017b)).

Figure 2: A comparison of morph-fitting (the MFIT-AR variant) with two other standard specialisation approaches using the same set of morphological constraints: Retrofitting (RF) (Faruqui et al., 2015) and Counter-fitting (CF) (Mrkšić et al., 2016). Spearman's ρ correlation scores on the multilingual SimLex-999 dataset for the same six distributional spaces from Tab. 5.
Besides their lower performance, the two other specialisation models have additional disadvantages compared to the proposed morph-fitting model. First, retrofitting is able to incorporate only synonymy/ATTRACT pairs, while our results demonstrate the usefulness of both types of constraints, both for intrinsic evaluation (Tab. 5) and downstream tasks (see later Fig. 3). Second, counter-fitting is computationally intractable with SGNS-LARGE vectors, as its regularisation term involves the computation of all pairwise distances between words in the vocabulary. Further Discussion The simplicity of the used language-specific rules does come at a cost of occasionally generating incorrect linguistic constraints such as (tent, intent), (prove, improve) or (press, impress). In future work, we will study how to fur61 ther refine extracted sets of constraints. We also plan to conduct experiments with gold standard morphological lexicons on languages for which such resources exist (Sylak-Glassman et al., 2015; Cotterell et al., 2016b), and investigate approaches which learn morphological inflections and derivations in different languages automatically as another potential source of morphological constraints (Soricut and Och, 2015; Cotterell et al., 2016a; Faruqui et al., 2016; Kann et al., 2017; Aharoni and Goldberg, 2017, i.a.). 5 Downstream Task: Dialogue State Tracking (DST) Goal-oriented dialogue systems provide conversational interfaces for tasks such as booking flights or finding restaurants. In slot-based systems, application domains are specified using ontologies that define the search constraints which users can express. An ontology consists of a number of slots and their assorted slot values. In a restaurant search domain, sets of slot-values could include PRICE = [cheap, expensive] or FOOD = [Thai, Indian, ...]. The DST model is the first component of modern dialogue pipelines (Young, 2010). It serves to capture the intents expressed by the user at each dialogue turn and update the belief state. This probability distribution over the possible dialogue states (defined by the domain ontology) is the system’s internal estimate of the user’s goals. It is used by the downstream dialogue manager component to choose the subsequent system response (Su et al., 2016). The following example shows the true dialogue state in a multi-turn dialogue: User: What’s good in the southern part of town? inform(area=south) System: Vedanta is the top-rated Indian place. User: How about something cheaper? inform(area=south, price=cheap) System: Seven Days is very popular. Great hot pot. User: What’s the address? inform(area=south, price=cheap); request(address) System: Seven Days is at 66 Regent Street. The Dialogue State Tracking Challenge (DSTC) shared task series formalised the evaluation and provided labelled DST datasets (Henderson et al., 2014a,b; Williams et al., 2016). While a plethora of DST models are available based on, e.g., handcrafted rules (Wang et al., 2014) or conditional random fields (Lee and Eskenazi, 2013), the recent DST methodology has seen a shift towards neuralnetwork architectures (Henderson et al., 2014c,d; Zilka and Jurcicek, 2015; Mrkši´c et al., 2015; Perez and Liu, 2017; Liu and Perez, 2017; Vodolán et al., 2017; Mrkši´c et al., 2017a, i.a.). 
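To make the belief-state bookkeeping in the example dialogue above concrete, here is a deliberately simplified sketch that accumulates the user's inform constraints across turns into a single dictionary. Real trackers such as the NBT maintain a probability distribution over ontology values rather than hard assignments, so this is illustration only; all names are ours.

```python
def update_belief_state(belief, turn_goals):
    """belief: dict slot -> value accumulated so far.
    turn_goals: dict of slot -> value expressed in the current turn."""
    belief = dict(belief)     # keep earlier goals, overwrite a slot on conflict
    belief.update(turn_goals)
    return belief

state = {}
state = update_belief_state(state, {"area": "south"})   # turn 1: inform(area=south)
state = update_belief_state(state, {"price": "cheap"})  # turn 2: inform(price=cheap)
state = update_belief_state(state, {})                  # turn 3: request(address) adds no goal
assert state == {"area": "south", "price": "cheap"}
```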
Model: Neural Belief Tracker To detect intents in user utterances, most existing models rely on either (or both): 1) Spoken Language Understanding models which require large amounts of annotated training data; or 2) hand-crafted, domain-specific lexicons which try to capture lexical and morphological variation. The Neural Belief Tracker (NBT) is a novel DST model which overcomes both issues by reasoning purely over pre-trained word vectors (Mrkši´c et al., 2017a). The NBT learns to compose these vectors into intermediate utterance and context representations. These are then used to decide which of the ontology-defined intents (goals) have been expressed by the user. The NBT model keeps word vectors fixed during training, so that unseen, yet related words can be mapped to the right intent at test time (e.g. northern to north). Data: Multilingual WOZ 2.0 Dataset Our DST evaluation is based on the WOZ dataset, released by Wen et al. (2017). In this Wizard-of-Oz setup, two Amazon Mechanical Turk workers assumed the role of the user and the system asking/providing information about restaurants in Cambridge (operating over the same ontology and database used for DSTC2 (Henderson et al., 2014a)). Users typed instead of speaking, removing the need to deal with noisy speech recognition. In DSTC datasets, users would quickly adapt to the system’s inability to deal with complex queries. Conversely, the WOZ setup allowed them to use sophisticated language. The WOZ 2.0 release expanded the dataset to 1,200 dialogues (Mrkši´c et al., 2017a). In this work, we use translations of this dataset to Italian and German, released by Mrkši´c et al. (2017b). Evaluation Setup The principal metric we use to measure DST performance is the joint goal accuracy, which represents the proportion of test set dialogue turns where all user goals expressed up to that point of the dialogue were decoded correctly (Henderson et al., 2014a). The NBT models for EN, DE and IT are trained using four variants of the SGNS-LARGE vectors: 1) the initial distributional vectors; 2) morph-fixed vectors; 3) and 4) the two variants of morph-fitted vectors (see Sect. 3). As shown by Mrkši´c et al. (2017b), semantic specialisation of the employed word vectors ben62 Distrib MFix MFit-A MFit-AR 0.15 0.20 0.25 0.30 0.35 0.40 0.45 SimLex (Spearman’s ρ) 0.60 0.65 0.70 0.75 0.80 0.85 DST Performance (Joint) (a) English Distrib MFix MFit-A MFit-AR 0.15 0.20 0.25 0.30 0.35 0.40 0.45 SimLex (Spearman’s ρ) SimLex 0.60 0.65 0.70 0.75 0.80 0.85 DST Performance (Joint) DST (b) German Distrib MFix MFit-A MFit-AR 0.15 0.20 0.25 0.30 0.35 0.40 0.45 SimLex (Spearman’s ρ) 0.60 0.65 0.70 0.75 0.80 0.85 DST Performance (Joint) (c) Italian Distrib MFix MFit-A MFit-AR RU Word Vector Space 0.15 0.20 0.25 0.30 0.35 0.40 0.45 SimLex (Spearman’s ρ) SimLex (d) Russian Figure 3: An overview of the results (Spearman’s ρ correlation) for four languages on SimLex-999 (grey bars, left y axis) and the downstream DST performance (dark bars, right y axis) using SGNS-LARGE vectors (d = 300), see Tab. 3 and Sect. 3. The left y axis measures the intrinsic word similarity performance, while the right y axis provides the scale for the DST performance (there are no DST datasets for Russian). efits DST performance across all three languages. However, large gains on SimLex-999 do not always induce correspondingly large gains in downstream performance. 
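Before turning to the results, note that the joint goal accuracy used throughout follows directly from its definition above: a turn counts as correct only if the full accumulated goal state up to that turn is decoded exactly. A small sketch, under our own assumption that gold and predicted states are stored as one slot-value dictionary per turn:

```python
def joint_goal_accuracy(gold_states, predicted_states):
    """Proportion of turns whose entire accumulated goal state is predicted correctly.
    Both arguments are lists (one entry per turn) of slot -> value dictionaries."""
    assert len(gold_states) == len(predicted_states)
    correct = sum(1 for g, p in zip(gold_states, predicted_states) if g == p)
    return correct / len(gold_states) if gold_states else 0.0

# e.g. gold = [{"area": "south"}, {"area": "south", "price": "cheap"}]
#      pred = [{"area": "south"}, {"area": "south", "price": "expensive"}]
#      -> 1 of 2 turns fully correct -> 0.5
```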
In our experiments, we investigate the extent to which morph-fitting improves DST performance, and whether these gains exhibit stronger correlation with intrinsic performance. Results and Discussion The dark bars (against the right axes) in Fig. 3 show the DST performance of NBT models making use of the four vector collections. IT and DE benefit from both kinds of morph-fitting: IT performance increases from 74.1 →78.1 (MFIT-A) and DE performance rises even more: 60.6 →66.3 (MFIT-AR), setting a new state-of-the-art score for both datasets. The morph-fixed vectors do not enhance DST performance, probably because fixing word vectors to their highest frequency inflectional form eliminates useful semantic content encoded in the original vectors. On the other hand, morph-fitting makes use of this information, supplementing it with semantic relations between different morphological forms. These conclusions are in line with the SimLex gains, where morph-fitting outperforms both distributional and morph-fixed vectors. English performance shows little variation across the four word vector collections investigated here. This corroborates our intuition that, as a morphologically simpler language, English stands to gain less from fine-tuning the morphological variation for downstream applications. This result again points at the discrepancy between intrinsic and extrinsic evaluation: the considerable gains in SimLex performance do not necessarily induce similar gains in downstream performance. Additional discrepancies between SimLex and downstream DST performance are detected for German and Italian. While we observe a slight drop in SimLex performance with the DE MFIT-AR vectors compared to the MFIT-A ones, their relative performance is reversed in the DST task. On the other hand, we see the opposite trend in Italian, where the MFITA vectors score lower than the MFIT-AR vectors on SimLex, but higher on the DST task. In summary, we believe these results show that SimLex is not a perfect proxy for downstream performance in language understanding tasks. Regardless, its performance does correlate with downstream performance to a large extent, providing a useful indicator for the usefulness of specific word vector 63 spaces for extrinsic tasks such as DST. 6 Related Work Semantic Specialisation A standard approach to incorporating external information into vector spaces is to pull the representations of similar words closer together. Some models integrate such constraints into the training procedure, modifying the prior or the regularisation (Yu and Dredze, 2014; Xu et al., 2014; Bian et al., 2014; Kiela et al., 2015), or using a variant of the SGNS-style objective (Liu et al., 2015; Osborne et al., 2016). Another class of models, popularly termed retrofitting, injects lexical knowledge from available semantic databases (e.g., WordNet, PPDB) into pre-trained word vectors (Faruqui et al., 2015; Jauhar et al., 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkši´c et al., 2016). Morph-fitting falls into the latter category. However, instead of resorting to curated knowledge bases, and experimenting solely with English, we show that the morphological richness of any language can be exploited as a source of inexpensive supervision for fine-tuning vector spaces, at the same time specialising them to better reflect true semantic similarity, and learning more accurate representations for low-frequency words. 
Word Vectors and Morphology The use of morphological resources to improve the representations of morphemes and words is an active area of research. The majority of proposed architectures encode morphological information, provided either as gold standard morphological resources (SylakGlassman et al., 2015) such as CELEX (Baayen et al., 1995) or as an external analyser such as Morfessor (Creutz and Lagus, 2007), along with distributional information jointly at training time in the language modelling (LM) objective (Luong et al., 2013; Botha and Blunsom, 2014; Qiu et al., 2014; Cotterell and Schütze, 2015; Bhatia et al., 2016, i.a.). The key idea is to learn a morphological composition function (Lazaridou et al., 2013; Cotterell and Schütze, 2017) which synthesises the representation of a word given the representations of its constituent morphemes. Contrary to our work, these models typically coalesce all lexical relations. Another class of models, operating at the character level, shares a similar methodology: such models compose token-level representations from subcomponent embeddings (subwords, morphemes, or characters) (dos Santos and Zadrozny, 2014; Ling et al., 2015; Cao and Rei, 2016; Kim et al., 2016; Wieting et al., 2016; Verwimp et al., 2017, i.a.). In contrast to prior work, our model decouples the use of morphological information, now provided in the form of inflectional and derivational rules transformed into constraints, from the actual training. This pipelined approach results in a simpler, more portable model. In spirit, our work is similar to Cotterell et al. (2016b), who formulate the idea of post-training specialisation in a generative Bayesian framework. Their work uses gold morphological lexicons; we show that competitive performance can be achieved using a non-exhaustive set of simple rules. Our framework facilitates the inclusion of antonyms at no extra cost and naturally extends to constraints from other sources (e.g., WordNet) in future work. Another practical difference is that we focus on similarity and evaluate morph-fitting in a well-defined downstream task where the artefacts of the distributional hypothesis are known to prompt statistical system failures. 7 Conclusion and Future Work We have presented a novel morph-fitting method which injects morphological knowledge in the form of linguistic constraints into word vector spaces. The method makes use of implicit semantic signals encoded in inflectional and derivational rules which describe the morphological processes in a language. The results in intrinsic word similarity tasks show that morph-fitting improves vector spaces induced by distributional models across four languages. Finally, we have shown that the use of morph-fitted vectors boosts the performance of downstream language understanding models which rely on word representations as features, especially for morphologically rich languages such as German. Future work will focus on other potential sources of morphological knowledge, porting the framework to other morphologically rich languages and downstream tasks, and on further refinements of the post-processing specialisation algorithm and the constraint selection. Acknowledgments This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). RR is supported by the IntelICRI grant: Hybrid Models for Minimally Supervised Information Extraction from Conversations. The authors are grateful to the anonymous reviewers for their helpful suggestions. 
64 References Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of ACL. https://arxiv.org/abs/1611.01487. Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of CoNLL. pages 183–192. http://www.aclweb.org/anthology/W133520. Eleftherios Avramidis and Philipp Koehn. 2008. Enriching morphologically poor languages for statistical machine translation. In Proceedings of ACL. pages 763–770. http://www.aclweb.org/anthology/P/P08/P08-1087. Harald R. Baayen, Richard Piepenbrock, and Hedderik van Rijn. 1995. The CELEX lexical data base on CD-ROM . Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL. pages 809– 815. http://www.aclweb.org/anthology/P14-2131. Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don’t count, predict! A systematic comparison of contextcounting vs. context-predicting semantic vectors. In Proceedings of ACL. pages 238–247. http://www.aclweb.org/anthology/P14-1023. Parminder Bhatia, Robert Guthrie, and Jacob Eisenstein. 2016. Morphological priors for probabilistic neural word embeddings. In Proceedings of EMNLP. pages 490–500. https://aclweb.org/anthology/D16-1047. Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Proceedings of ECML-PKDD. pages 132– 148. https://doi.org/10.1007/978-3-662-44848-9_9. Jan A. Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In Proceedings of ICML. pages 1899–1907. http://jmlr.org/proceedings/papers/v32/botha14.html. Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research 49:1–47. https://doi.org/10.1613/jair.4135. Kris Cao and Marek Rei. 2016. A joint model for word embedding and word morphology. In Proceedings of the 1st Workshop on Representation Learning for NLP. pages 18–26. http://aclweb.org/anthology/W/W16/W16-1603. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP. pages 740–750. http://www.aclweb.org/anthology/D14-1082. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. http://dl.acm.org/citation.cfm?id=1953048.2078186. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The sigmorphon 2016 shared task - morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. pages 10–22. http://anthology.aclweb.org/W16-2002. Ryan Cotterell and Hinrich Schütze. 2015. Morphological word-embeddings. In Proceedings of NAACL-HLT. pages 1287–1292. http://www.aclweb.org/anthology/N15-1140. Ryan Cotterell and Hinrich Schütze. 2017. Joint semantic synthesis and morphological analysis of the derived word. Transactions of the ACL https://arxiv.org/abs/1701.00946. Ryan Cotterell, Hinrich Schütze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In Proceedings of ACL. pages 1651–1660. http://www.aclweb.org/anthology/P161156. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. 
TSLP 4(1):3:1–3:34. http://doi.acm.org/10.1145/1217098.1217101. James Curran. 2004. From Distributional to Semantic Similarity. Ph.D. thesis, School of Informatics, University of Edinburgh. http://hdl.handle.net/1842/563. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of ICLR (Workshop Papers). http://arxiv.org/abs/1412.6568. Cícero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of ICML. pages 1818–1826. http://jmlr.org/proceedings/papers/v32/santos14.html. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. http://dl.acm.org/citation.cfm?id=2021068. Maud Ehrmann, Francesco Cecconi, Daniele Vannella, John Philip Mccrae, Philipp Cimiano, and Roberto Navigli. 2014. Representing multilingual data as linked data: The case of BabelNet 2.0. In Proceedings of LREC. pages 401–408. http://www.lrecconf.org/proceedings/lrec2014/summaries/810.html. 65 Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL-HLT. pages 1606– 1615. http://www.aclweb.org/anthology/N15-1184. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL. pages 462– 471. http://www.aclweb.org/anthology/E14-1049. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of NAACL-HLT. pages 634–643. http://www.aclweb.org/anthology/N16-1077. Christiane Fellbaum. 1998. WordNet. https://mitpress.mit.edu/books/wordnet. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems 20(1):116–131. https://doi.org/10.1145/503104.503110. Victoria Fromkin, Robert Rodman, and Nina Hyams. 2013. An Introduction to Language, 10th Edition. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of NAACL-HLT. pages 758–764. http://www.aclweb.org/anthology/N131092. Daniela Gerz, Ivan Vuli´c, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb3500: A large-scale evaluation set of verb similarity. In Proceedings of EMNLP. pages 2173–2182. https://aclweb.org/anthology/D16-1235. Zellig S. Harris. 1954. Distributional structure. Word 10(23):146–162. Martin Haspelmath and Andrea Sims. 2013. Understanding morphology. Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014a. The Second Dialog State Tracking Challenge. In Proceedings of SIGDIAL. pages 263– 272. http://aclweb.org/anthology/W/W14/W144337.pdf. Matthew Henderson, Blaise Thomson, and Jason D. Wiliams. 2014b. The Third Dialog State Tracking Challenge. In Proceedings of IEEE SLT. pages 324– 329. https://doi.org/10.1109/SLT.2014.7078595. Matthew Henderson, Blaise Thomson, and Steve Young. 2014c. Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised adaptation. In Proceedings of IEEE SLT. pages 360–365. Matthew Henderson, Blaise Thomson, and Steve Young. 2014d. Word-based dialog state tracking with recurrent neural networks. In Proceedings of SIGDIAL. 
pages 292–299. http://aclweb.org/anthology/W/W14/W144340.pdf. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics 41(4):665–695. https://doi.org/10.1162/COLI_a_00237. Sujay Kumar Jauhar, Chris Dyer, and Eduard H. Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of NAACL. pages 683–693. http://www.aclweb.org/anthology/N15-1070. Anders Johannsen, Héctor Martínez Alonso, and Anders Søgaard. 2015. Any-language frame-semantic parsing. In Proceedings of EMNLP. pages 2062– 2066. http://aclweb.org/anthology/D15-1245. Katharina Kann, Ryan Cotterell, and Hinrich Schütze. 2017. Neural multi-source morphological reinflection. In Proceedings of EACL. pages 514–524. http://www.aclweb.org/anthology/E17-1049. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of EMNLP. pages 2044– 2048. http://aclweb.org/anthology/D15-1242. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of AAAI. pages 2741– 2749. Angeliki Lazaridou, Marco Marelli, Roberto Zamparelli, and Marco Baroni. 2013. Compositionally derived representations of morphologically complex words in distributional semantics. In Proceedings of ACL. pages 1517–1526. http://www.aclweb.org/anthology/P13-1149. Sungjin Lee and Maxine Eskenazi. 2013. Recipe for building robust spoken dialog state trackers: Dialog State Tracking Challenge system description. In Proceedings of SIGDIAL. pages 414– 422. http://aclweb.org/anthology/W/W13/W134066.pdf. Ira Leviant and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. CoRR abs/1508.00106. http://arxiv.org/abs/1508.00106. Omer Levy and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of ACL. pages 302–308. http://www.aclweb.org/anthology/P14-2050. Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: 66 Compositional character models for open vocabulary word representation. In Proceedings of EMNLP. pages 1520–1530. http://aclweb.org/anthology/D151176. Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of EACL. pages 1–10. http://www.aclweb.org/anthology/E17-1001. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of ACL. pages 1501–1511. http://www.aclweb.org/anthology/P15-1145. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing. pages 151–159. http://www.aclweb.org/anthology/W15-1521. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of CoNLL. pages 104–113. http://www.aclweb.org/anthology/W13-3512. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. Context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of CoNLL. pages 51–61. http://aclweb.org/anthology/K/K16/K16-1006.pdf. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. 
Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. pages 3111–3119. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Proceedings of NIPS. pages 2265– 2273. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gaši´c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. In Proceedings of ACL. pages 794–799. http://aclweb.org/anthology/P/P15/P15-2130.pdf. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Tsung-Hsien Wen, and Steve Young. 2017a. Neural Belief Tracker: Data-driven dialogue state tracking. In Proceedings of ACL. http://arxiv.org/abs/1606.03777. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gaši´c, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of NAACLHLT. http://aclweb.org/anthology/N/N16/N161018.pdf. Nikola Mrkši´c, Ivan Vuli´c, Diarmuid Ó Séaghdha, Roi Reichart, Milica Gaši´c, Anna Korhonen, and Steve Young. 2017b. Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints. arXiv. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML. pages 807–814. http://www.icml2010.org/papers/432.pdf. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence 193:217–250. https://doi.org/10.1016/j.artint.2012.07.001. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction. In Proceedings of ACL. pages 454–459. http://anthology.aclweb.org/P16-2074. Dominique Osborne, Shashi Narayan, and Shay Cohen. 2016. Encoding prior knowledge with eigenword embeddings. Transactions of the ACL 4:417–430. https://arxiv.org/abs/1509.01007. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In Proceedings of ACL. pages 425–430. http://www.aclweb.org/anthology/P152070. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. pages 1532– 1543. http://www.aclweb.org/anthology/D14-1162. Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using Memory Network. In Proceedings of EACL. pages 305–314. http://www.aclweb.org/anthology/E17-1029. Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of word representations and morpheme representations. In Proceedings of COLING. pages 141–150. http://www.aclweb.org/anthology/C14-1015. Patrick Schone and Daniel Jurafsky. 2001. Knowledge-free induction of inflectional morphologies. In Proceedings of NAACL. http://aclweb.org/anthology/N/N01/N01-1024. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL. pages 258–267. http://www.aclweb.org/anthology/K15-1026. Roy Schwartz, Roi Reichart, and Ari Rappoport. 2016. 
Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives. 67 In Proceedings of NAACL-HLT. pages 499–505. http://www.aclweb.org/anthology/N16-1060. Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of NAACL-HLT. pages 1627–1637. http://www.aclweb.org/anthology/N15-1186. Pei-Hao Su, Milica Gaši´c, Nikola Mrkši´c, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. ???? Continuously learning neural dialogue management. Pei-Hao Su, Milica Gaši´c, Nikola Mrkši´c, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In Proceedings of ACL. pages 2431–2441. http://www.aclweb.org/anthology/P161230. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A languageindependent feature schema for inflectional morphology. In Proceedings of ACL. pages 674–680. http://www.aclweb.org/anthology/P15-2111. Reut Tsarfaty, Djamé Seddah, Yoav Goldberg, Sandra Kuebler, Yannick Versley, Marie Candito, Jennifer Foster, Ines Rehbein, and Lamia Tounsi. 2010. Statistical parsing of morphologically rich languages (SPMRL) What, how and whither. In Proceedings of the NAACL Workshop on Statistical Parsing of Morphologically-Rich Languages. pages 1– 12. http://www.aclweb.org/anthology/W10-1401. Joseph P. Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL. pages 384–394. http://www.aclweb.org/anthology/P10-1040. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: vector space models of semantics. Journal of Artifical Intelligence Research 37(1):141–188. https://doi.org/10.1613/jair.2934. Lyan Verwimp, Joris Pelemans, Hugo Van hamme, and Patrick Wambacq. 2017. Character-word LSTM language models. In Proceedings of EACL. pages 417– 427. http://www.aclweb.org/anthology/E17-1040. Miroslav Vodolán, Rudolf Kadlec, and Jan Kleindienst. 2017. Hybrid dialog state tracker with ASR features. In Proceedings of EACL. pages 205–210. http://www.aclweb.org/anthology/E17-2033. Ivan Vuli´c and Anna Korhonen. 2016a. Is "universal syntax" universally useful for learning distributed word representations? In Proceedings of ACL. pages 518–524. http://anthology.aclweb.org/P16-2084. Ivan Vuli´c and Anna Korhonen. 2016b. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of ACL. pages 247–257. http://www.aclweb.org/anthology/P16-1024. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of AAAI. pages 1112–1119. Tsung-Hsien Wen, David Vandyke, Nikola Mrkši´c, Milica Gaši´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of EACL. http://www.aclweb.org/anthology/E17-1042. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL 3:345–358. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of EMNLP. pages 1504–1515. https://aclweb.org/anthology/D16-1157. Jason D. Williams, Antoine Raux, and Matthew Henderson. 2016. The Dialog State Tracking Challenge series: A review. 
Dialogue & Discourse 7(3):4–33. http://dad.unibielefeld.de/index.php/dad/article/view/3685. Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. RC-NET: A general framework for incorporating knowledge into word representations. In Proceedings of CIKM. pages 1219–1228. https://doi.org/10.1145/2661829.2662038. Steve Young. 2010. Cognitive User Interfaces. IEEE Signal Processing Magazine . Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of ACL. pages 545–550. http://www.aclweb.org/anthology/P14-2089. Britta Zeller, Jan Šnajder, and Sebastian Padó. 2013. DErivBase: Inducing and evaluating a derivational morphology resource for German. In Proceedings of ACL. pages 1201–1211. http://www.aclweb.org/anthology/P13-1118. Lukas Zilka and Filip Jurcicek. 2015. Incremental LSTM-based dialog state tracker. In Proceedings of ASRU. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of EMNLP. pages 1393–1398. http://www.aclweb.org/anthology/D13-1141. 68
2017
6
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 643–653 Vancouver, Canada, July 30 - August 4, 2017. ©2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1060

Domain Attention with an Ensemble of Experts

Young-Bum Kim† Karl Stratos‡ Dongchan Kim† †Microsoft AI and Research ‡Bloomberg L. P. {ybkim, dongchan.kim}@microsoft.com [email protected]

Abstract

An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new intents and slots is added. We describe a solution based on attending an ensemble of domain experts. We assume K domain-specific intent and slot models trained on respective domains. When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain. In experiments, the model significantly outperforms baselines that do not use domain adaptation and also performs better than the full retraining approach.

1 Introduction

An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. In spoken language understanding, new domains of interest for categorizing user utterances are added on a regular basis. (A scenario frequently arising in practice is having a request for creating a new virtual domain targeting a specific application. One typical use case is that of building natural language capability through intent and slot modeling, without actually building a domain classifier, targeting a specific application.) For instance, we may add an ORDERPIZZA domain and desire a domain-specific intent and semantic slot tagger with a limited amount of training data. Training only on the target domain fails to utilize the existing resources in other domains that are relevant (e.g., labeled data for PLACES domain with place name, location as the slot types), but naively training on the union of all domains does not work well since different domains can have widely varying distributions. Domain adaptation offers a balance between these extremes by using all data but simultaneously distinguishing domain types. A common approach for adapting to a new domain is to retrain a global model across all K + 1 domains using well-known techniques, for example the feature augmentation method of Daumé III (2009), which trains a single model that has one domain-invariant component along with K + 1 domain-specific components, each of which is specialized in a particular domain. While such a global model is effective, it requires re-estimating a model from scratch on all K + 1 domains each time a new domain is added. This is burdensome particularly in our scenario in which new domains can arise frequently. In this paper, we present an alternative solution based on attending an ensemble of domain experts. We assume that we have already trained K domain-specific models on respective domains.
Given a new domain K +1 with a small amount of training data, we train a model on that data alone but queries the K experts as part of the training procedure. We compute an attention weight for each of these experts and use their combined feedback along with the model’s own opinion to make predictions. This way, the model is able to selectively capitalize on relevant domains much like in 643 standard domain adaptation but without explicitly re-training on all domains together. In experiments, we show clear gains in a domain adaptation scenario across 7 test domains, yielding average error reductions of 44.97% for intent classification and 32.30% for slot tagging compared to baselines that do not use domain adaptation. Moreover we have higher accuracy than the full re-training approach of Kim et al. (2016c), a neural analog of Daum´e III (2009). 2 Related Work 2.1 Domain Adaptation There is a venerable history of research on domain adaptation (Daume III and Marcu, 2006; Daum´e III, 2009; Blitzer et al., 2006, 2007; Pan et al., 2011) which is concerned with the shift in data distribution from one domain to another. In the context of NLP, a particularly successful approach is the feature augmentation method of Daum´e III (2009) whose key insight is that if we partition the model parameters to those that handle common patterns and those that handle domainspecific patterns, the model is forced to learn from all domains yet preserve domain-specific knowledge. The method is generalized to the neural paradigm by Kim et al. (2016c) who jointly use a domain-specific LSTM and also a global LSTM shared across all domains. In the context of SLU, Jaech et al. (2016) proposed K domain-specific feedforward layers with a shared word-level LSTM layer across domains; Kim et al. (2016c) instead employed K + 1 LSTMs. Hakkani-T¨ur et al. (2016) proposed to employ a sequence-to-sequence model by introducing a fictitious symbol at the end of an utterance of which tag represents the corresponding domain and intent. All these methods require one to re-train a model from scratch to make it learn the correlation and invariance between domains. This becomes difficult to scale when there is a new domain coming in at high frequency. We address this problem by proposing a method that only calls K trained domain experts; we do not have to re-train these domain experts. This gives a clear computational advantage over the feature augmentation method. 2.2 Spoken Language Understanding Recently, there has been much investment on the personal digital assistant (PDA) technology in industry (Sarikaya, 2015; Sarikaya et al., 2016). Apples Siri, Google Now, Microsofts Cortana, and Amazons Alexa are some examples of personal digital assistants. Spoken language understanding (SLU) is an important component of these examples that allows natural communication between the user and the agent (Tur, 2006; El-Kahky et al., 2014). PDAs support a number of scenarios including creating reminders, setting up alarms, note taking, scheduling meetings, finding and consuming entertainment (i.e. movie, music, games), finding places of interest and getting driving directions to them (Kim et al., 2016a). 
Naturally, there has been an extensive line of prior studies for domain scaling problems to easily scale to a larger number of domains: pretraining (Kim et al., 2015c), transfer learning (Kim et al., 2015d), constrained decoding with a single model (Kim et al., 2016a), multi-task learning (Jaech et al., 2016), neural domain adaptation (Kim et al., 2016c), domainless adaptation (Kim et al., 2016b), a sequence-to-sequence model (Hakkani-Tür et al., 2016), adversarial domain training (Kim et al., 2017) and zero-shot learning (Chen et al., 2016; Ferreira et al., 2015). There is also a line of prior work on enhancing model capability and features: jointly modeling intent and slot predictions (Jeong and Lee, 2008; Xu and Sarikaya, 2013; Guo et al., 2014; Zhang and Wang, 2016; Liu and Lane, 2016a,b), modeling SLU models with web search click logs (Li et al., 2009; Kim et al., 2015a) and enhancing features, including representations (Anastasakos et al., 2014; Sarikaya et al., 2014; Celikyilmaz et al., 2016, 2010; Kim et al., 2016d) and lexicon (Liu and Sarikaya, 2014; Kim et al., 2015b).

3 Method

We use an LSTM simply as a mapping $\phi : \mathbb{R}^d \times \mathbb{R}^{d'} \to \mathbb{R}^{d'}$ that takes an input vector $x$ and a state vector $h$ to output a new state vector $h' = \phi(x, h)$. See Hochreiter and Schmidhuber (1997) for a detailed description. At a high level, the individual model builds on several ingredients shown in Figure 1: character and word embeddings, a bidirectional LSTM (BiLSTM) at the character level, a BiLSTM at the word level, and a feedforward network at the output.

Figure 1: The overall network architecture of the individual model.

3.1 Individual Model Architecture

Let $C$ denote the set of character types and $W$ the set of word types. Let $\oplus$ denote the vector concatenation operation. A wildly successful architecture for encoding a sentence $(w_1 \ldots w_n) \in W^n$ is given by bidirectional LSTMs (BiLSTMs) (Schuster and Paliwal, 1997; Graves, 2012). Our model first constructs a network over an utterance closely following Lample et al. (2016). The model parameters $\Theta$ associated with this BiLSTM layer are
• Character embedding $e_c \in \mathbb{R}^{25}$ for each $c \in C$
• Character LSTMs $\phi^C_f, \phi^C_b : \mathbb{R}^{25} \times \mathbb{R}^{25} \to \mathbb{R}^{25}$
• Word embedding $e_w \in \mathbb{R}^{100}$ for each $w \in W$
• Word LSTMs $\phi^W_f, \phi^W_b : \mathbb{R}^{150} \times \mathbb{R}^{100} \to \mathbb{R}^{100}$

Let $w_1 \ldots w_n \in W$ denote a word sequence where word $w_i$ has character $w_i(j) \in C$ at position $j$. First, the model computes a character-sensitive word representation $v_i \in \mathbb{R}^{150}$ as

$$f^C_j = \phi^C_f\left(e_{w_i(j)}, f^C_{j-1}\right) \quad \forall j = 1 \ldots |w_i|$$
$$b^C_j = \phi^C_b\left(e_{w_i(j)}, b^C_{j+1}\right) \quad \forall j = |w_i| \ldots 1$$
$$v_i = f^C_{|w_i|} \oplus b^C_1 \oplus e_{w_i}$$

for each $i = 1 \ldots n$. (For simplicity, we assume some random initial state vectors such as $f^C_0$ and $b^C_{|w_i|+1}$ when we describe LSTMs.) Next, the model computes

$$f^W_i = \phi^W_f\left(v_i, f^W_{i-1}\right) \quad \forall i = 1 \ldots n$$
$$b^W_i = \phi^W_b\left(v_i, b^W_{i+1}\right) \quad \forall i = n \ldots 1$$

and induces a character- and context-sensitive word representation $h_i \in \mathbb{R}^{200}$ as

$$h_i = f^W_i \oplus b^W_i \quad (1)$$

for each $i = 1 \ldots n$. These vectors can be used to perform intent classification or slot tagging on the utterance.
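A compact PyTorch sketch of this encoder is given below. The dimensionalities follow the description above (25-dimensional character embeddings and character LSTMs, 100-dimensional word embeddings and word-level LSTM states), but the class itself is our own illustrative reconstruction, not the authors' code (their implementation uses DyNet, see the experimental setup below).

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Character- and context-sensitive word representations h_1..h_n (Eq. 1)."""
    def __init__(self, n_chars, n_words, c_dim=25, w_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.word_emb = nn.Embedding(n_words, w_dim)
        # character-level BiLSTM: 25 -> 2*25; word-level BiLSTM: 150 -> 2*100
        self.char_lstm = nn.LSTM(c_dim, c_dim, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(2 * c_dim + w_dim, w_dim, bidirectional=True, batch_first=True)

    def forward(self, char_ids, word_ids):
        # char_ids: list of n LongTensors (character ids of each word, variable length)
        # word_ids: LongTensor of shape (n,)
        v = []
        for i, chars in enumerate(char_ids):
            c = self.char_emb(chars).unsqueeze(0)       # (1, |w_i|, 25)
            out, _ = self.char_lstm(c)                  # (1, |w_i|, 50)
            fwd_last = out[0, -1, :25]                  # f^C_{|w_i|}
            bwd_first = out[0, 0, 25:]                  # b^C_1
            v.append(torch.cat([fwd_last, bwd_first, self.word_emb(word_ids[i])], dim=-1))
        v = torch.stack(v).unsqueeze(0)                 # (1, n, 150): the vectors v_i
        h, _ = self.word_lstm(v)                        # (1, n, 200)
        return h.squeeze(0)                             # h_i = f^W_i concat b^W_i
```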
Intent Classification We can predict the intent of the utterance using the vectors $h_1 \ldots h_n \in \mathbb{R}^{200}$ in (1) as follows. Let $I$ denote the set of intent types. We introduce a single-layer feedforward network $g^i : \mathbb{R}^{200} \to \mathbb{R}^{|I|}$ whose parameters are denoted by $\Theta^i$. We compute a $|I|$-dimensional vector

$$\mu^i = g^i\left( \sum_{i=1}^{n} h_i \right)$$

and define the conditional probability of the correct intent $\tau$ as

$$p(\tau \mid h_1 \ldots h_n) \propto \exp\left(\mu^i_\tau\right) \quad (2)$$

The intent classification loss is given by the negative log likelihood:

$$L^i\left(\Theta, \Theta^i\right) = -\sum_l \log p\left(\tau^{(l)} \mid h^{(l)}\right) \quad (3)$$

where $l$ iterates over intent-annotated utterances.

Slot Tagging We predict the semantic slots of the utterance using $h_1 \ldots h_n \in \mathbb{R}^{200}$ in (1) as follows. Let $S$ denote the set of semantic types and $L$ the set of corresponding BIO label types (for example, to/O San/B-Source Francisco/I-Source airport/O), that is, $L = \{\text{B-}e : e \in S\} \cup \{\text{I-}e : e \in S\} \cup \{\text{O}\}$. We add a transition matrix $T \in \mathbb{R}^{|L| \times |L|}$ and a single-layer feedforward network $g^t : \mathbb{R}^{200} \to \mathbb{R}^{|L|}$ to the network; denote these additional parameters by $\Theta^t$. The conditional random field (CRF) tagging layer defines a joint distribution over label sequences $y_1 \ldots y_n \in L$ of $w_1 \ldots w_n$ as

$$p(y_1 \ldots y_n \mid h_1 \ldots h_n) \propto \exp\left( \sum_{i=1}^{n} T_{y_{i-1}, y_i} \times g^t_{y_i}(h_i) \right) \quad (4)$$

The tagging loss is given by the negative log likelihood:

$$L^t\left(\Theta, \Theta^t\right) = -\sum_l \log p\left(y^{(l)} \mid h^{(l)}\right) \quad (5)$$

where $l$ iterates over tagged sentences in the data. Alternatively, we can optimize the local loss:

$$L^{t\text{-}loc}\left(\Theta, \Theta^t\right) = -\sum_l \sum_i \log p\left(y_i^{(l)} \mid h_i^{(l)}\right) \quad (6)$$

where $p(y_i \mid h_i) \propto \exp\left(g^t_{y_i}(h_i)\right)$.

4 Method

4.1 Domain Attention Architecture

Now we assume that for each of the K domains we have an individual model described in Section 3.1. Denote these domain experts by $\Theta^{(1)} \ldots \Theta^{(K)}$. We now describe our model for a new domain K + 1. Given an utterance $w_1 \ldots w_n$, it uses a BiLSTM layer to induce a feature representation $h_1 \ldots h_n$ as specified in (1). It further invokes the K domain experts $\Theta^{(1)} \ldots \Theta^{(K)}$ on this utterance to obtain their feature representations $h^{(k)}_1 \ldots h^{(k)}_n$ for $k = 1 \ldots K$.

Figure 2: The overall network architecture of the domain attention, which consists of three components: (1) K domain experts + 1 target BiLSTM layer to induce a feature representation, (2) K domain experts + 1 target feedforward layer to output pre-trained label embeddings, and (3) a final feedforward layer to output an intent or slot. We have two separate attention mechanisms to combine feedback from domain experts.

For each word $w_i$, the model computes an attention weight for each domain $k = 1 \ldots K$ as

$$q^{dot}_{i,k} = h_i^\top h_i^{(k)} \quad (7)$$

in the simplest case. We also experiment with the bilinear function

$$q^{bi}_{i,k} = h_i^\top B\, h_i^{(k)} \quad (8)$$

where $B$ is an additional model parameter, and also the feedforward function

$$q^{feed}_{i,k} = W \tanh\left(U h_i + V h_i^{(k)} + b_1\right) + b_2 \quad (9)$$

where $U, V, W, b_1, b_2$ are additional model parameters. The final attention weights $a_{i,1} \ldots a_{i,K}$ are obtained by using a softmax layer

$$a_{i,k} = \frac{\exp(q_{i,k})}{\sum_{k'=1}^{K} \exp(q_{i,k'})} \quad (10)$$

The weighted combination of the experts' feedback is given by

$$h^{experts}_i = \sum_{k=1}^{K} a_{i,k}\, h^{(k)}_i \quad (11)$$

and the model makes predictions by using $\bar{h}_1 \ldots \bar{h}_n$, where

$$\bar{h}_i = h_i \oplus h^{experts}_i \quad (12)$$

These vectors replace the original feature vectors $h_i$ in defining the intent or tagging losses.

4.2 Domain Attention Variants

We also consider two variants of the domain attention architecture in Section 4.1.

Label Embedding In addition to the state vectors $h^{(1)} \ldots h^{(K)}$ produced by the K experts, we further incorporate their final (discrete) label predictions using pre-trained label embeddings. We induce embeddings $e_y$ for labels $y$ from all domains using the method of Kim et al. (2015d).
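Both the feature-level attention in (7)-(12) and the label-embedding attention detailed next follow the same recipe: score the target word representation against each expert, normalise with a softmax, and take the expectation over the experts' feedback. A minimal PyTorch sketch of the dot-product variant is given below; it is our own illustration, and it assumes the K expert encoders are frozen, pre-trained modules whose outputs have already been stacked.

```python
import torch
import torch.nn as nn

class DomainAttention(nn.Module):
    """Combine target word vectors h_i with K expert vectors h_i^(k) (Eqs. 7, 10-12)."""
    def forward(self, h, expert_h):
        # h:        (n, d)     target-domain representations for one utterance
        # expert_h: (K, n, d)  representations produced by the K frozen domain experts
        q = torch.einsum("nd,knd->nk", h, expert_h)          # dot-product scores   (Eq. 7)
        a = torch.softmax(q, dim=-1)                         # attention weights    (Eq. 10)
        h_experts = torch.einsum("nk,knd->nd", a, expert_h)  # weighted feedback    (Eq. 11)
        return torch.cat([h, h_experts], dim=-1)             # concatenation of Eq. 12

# hypothetical usage, assuming `experts` is a list of frozen UtteranceEncoder modules:
#   h = target_encoder(char_ids, word_ids)
#   expert_h = torch.stack([e(char_ids, word_ids) for e in experts])
#   h_bar = DomainAttention()(h, expert_h)
```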
At the i-th word, we predict the most likely label y(k) under the k-th expert and compute an attention weight as ¯qdot i,k = h⊤ i ey(k) (13) Then we compute an expectation over the experts’ predictions ¯ai,k = exp(¯qi,k) PK k=1 exp(¯qi,k) (14) hlabel i = K X k=1 ¯ai,key(k) i (15) and use it in conjunction with ¯hi. Note that this makes the objective a function of discrete decision and thus non-differentiable, but we can still optimize it in a standard way treating it as learning a stochastic policy. Selective Attention Instead of computing attention over all K experts, we only consider the top K′ ≤K that predict the highest label scores. We only compute attention over these K′ vectors. We experiment with various values of K′ 5 Experiments In this section, we describe the set of experiments conducted to evaluate the performance of our model. In order to fully assess the contribution of our approach, we also consider several baselines and variants besides our primary expert model. Domain |I| |S| Description EVENTS 10 12 Buy event tickets FITNESS 10 9 Track health M-TICKET 8 15 Buy movie tickets ORDERPIZZA 19 27 Order pizza REMINDER 19 20 Remind task TAXI 8 13 Find/book an cab TV 7 5 Control TV Table 1: The number of intent types (|I|), the number of slot types (|S|), and a short description of the test domains. Overlapping Domain Intents Slots EVENTS 70.00% 75.00% FITNESS 30.00% 77.78% M-TICKET 37.50% 100.00% ORDERPIZZA 47.37% 74.07% REMINDER 68.42% 85.00% TAXI 50.00% 100.00% TV 57.14% 60.00% AVG 51.49% 81.69% Table 2: The overlapping percentage of intent types and slot types with experts or source domains. 5.1 Test domains and Tasks To test the effectiveness of our proposed approach, we apply it to a suite of 7 Microsoft Cortana domains with 2 separate tasks in spoken language understanding: (1) intent classification and (2) slot (label) tagging. The intent classification task is a multi-class classification problem with the goal of determining to which one of the |I| intents a user utterance belongs within a given domain. The slot tagging task is a sequence labeling problem with the goal of identifying entities and chunking of useful information snippets in a user utterance. For example, a user could say “reserve a table at joeys grill for thursday at seven pm for five people”. Then the goal of the first task would be to classify this utterance as “make reservation” intent given the places domain, and the goal of the second task would be to tag “joeys grill” as restaurant, “thursday” as date, “seven pm” as time, and “five” as number people. The short descriptions on the 7 test domains are shown in Table 1. As the table shows, the test domains have different granularity and diverse semantics. For each personal assistant test domain, 647 we only used 1000 training utterances to simulate scarcity of newly labeled data. The amount of development and test utterance was 100 and 10k respectively. The similarities of test domains, represented by overlapping percentage, with experts or source domains are represented in Table 2. The intent overlapping percentage ranges from 30% on FITNESS domain to 70% on EVENTS, which averages out at 51.49%. And the slots for test domains overlaps more with those of source domains ranging from 60% on TV domain to 100% on both M-TICKET and TAXI domains, which averages out at 81.69%. 5.2 Experimental Setup Category |D| Example Trans. 4 BUS, FLIGHT Time 4 ALARM, CALENDAR Media 5 MOVIE, MUSIC Action 5 HOMEAUTO, PHONE Loc. 
3 HOTEL, BUSINESS Info 4 WEATHER, HEALTH TOTAL 25 Table 3: Overview of experts or source domains: Domain categories which have been created based on the label embeddings. These categorizations are solely for the purpose of describing domains because of the limited space and they are completely unrelated to the model. The number of sentences in each domain is in the range of 50k to 660k and the number of unique intents and slots are 200 and 500 respectively. In total, we have 25 domain-specific expert models. For the average performance, intent accuracy is 98% and slot F1 score is 96%. In testing our approach, we consider a domain adaptation (DA) scenario, where a target domain has a limited training data and the source domain has a sufficient amount of labeled data. We further consider a scenario, creating a new virtual domain targeting a specific scenario given a large inventory of intent and slot types and underlying models build for many different applications and scenarios. One typical use case is that of building natural language capability through intent and slot modeling (without actually building a domain classifier) targeting a specific application. Therefore, our experimental settings are rather different from previously considered settings for domain adaptation in two aspects: • Multiple source domains: In most previous works, only a pair of domains (source vs. target) have been considered, although they can be easily generalized to K > 2. Here, we experiment with K = 25 domains shown in Table 3. • Variant output: In a typical setting for domain adaptation, the label space is invariant across all domains. Here, the label space can be different in different domains, which is a more challenging setting. See Kim et al. (2015d) for details of this setting. For this DA scenario, we test whether our approach can effectively make a system to quickly generalize to a new domain with limited supervision given K existing domain experts shown in 3 . In summary, our approach is tested with 7 Microsoft Cortana personal assistant domains across 2 tasks of intent classification and slot tagging. Below shows more detail of our baselines and variants used in our experiments. Baselines: All models below use same underlying architecture described in Section 3.1 • TARGET: a model trained on a targeted domain without DA techniques. • UNION: a model trained on the union of a targeted domain and 25 domain experts. • DA: a neural domain adaptation method of Kim et al. (2016c) which trains domain specific K LSTMs with a generic LSTM on all domain training data. Domain Experts (DE) variants: All models below are based on attending on an ensemble of 25 domain experts (DE) described in Section 4.1, where a specific set of intent and slots models are trained for each domain. We have two feedback from domain experts: (1) feature representation from LSTM, and (2) label embedding from feedfoward described in Section 4.1 and Section 4.2, respectively. • DEB: DE without domain attention mechanism. It uses the unweighted combination of first feedback from experts like bag-of-word model. 648 • DE1: DE with domain attention with the weighted combination of the first feedbacks from experts. • DE2: DE1 with additional weighted combination of second feedbacks. • DES2: DE2 with selected attention mechanism, described in Section 4.2. 
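For concreteness, the difference between DEB (unweighted combination of expert feedback) and DE1 (attention-weighted combination, Eqs. 7 and 10-12) can be sketched as follows. This is our own minimal NumPy illustration with toy dimensions; the function and variable names are not taken from the paper or from any particular implementation.

```python
import numpy as np

def softmax(q):
    q = q - q.max()                      # for numerical stability
    e = np.exp(q)
    return e / e.sum()

def combine_experts(h_i, expert_h_i, weighted=True):
    """Combine K experts' feedback for one word position i.

    h_i        : target-model feature vector for word i, shape (d,)
    expert_h_i : list of K expert feature vectors h_i^(k), each shape (d,)
    weighted   : False -> DEB-style unweighted combination,
                 True  -> DE1-style dot-product attention (Eqs. 7 and 10).
    Returns the concatenation [h_i ; h_i^experts] used for prediction (Eqs. 11-12).
    """
    H = np.stack(expert_h_i)             # (K, d)
    if weighted:
        q = H @ h_i                      # q_{i,k} = h_i . h_i^(k)      (Eq. 7)
        a = softmax(q)                   # attention weights a_{i,k}    (Eq. 10)
    else:
        a = np.full(H.shape[0], 1.0 / H.shape[0])
    h_experts = a @ H                    # weighted expert feedback     (Eq. 11)
    return np.concatenate([h_i, h_experts])   # \bar{h}_i               (Eq. 12)

# toy usage: 3 experts, 100-dimensional feature vectors
h_i = np.random.randn(100)
experts = [np.random.randn(100) for _ in range(3)]
h_bar = combine_experts(h_i, experts)    # shape (200,)
```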
In our experiments, all the models were implemented using Dynet (Neubig et al., 2017) and were trained using Stochastic Gradient Descent (SGD) with Adam (Kingma and Ba, 2015)—an adaptive learning rate algorithm. We used the initial learning rate of 4 × 10−4 and left all the other hyper parameters as suggested in Kingma and Ba (2015). Each SGD update was computed without a minibatch with Intel MKL (Math Kernel Library)4. We used the dropout regularization (Srivastava et al., 2014) with the keep probability of 0.4 at each LSTM layer. To encode user utterances, we used bidirectional LSTMs (BiLSTMs) at the character level and the word level, along with 25 dimensional character embedding and 100 dimensional word embedding. The dimension of both the input and output of the character LSTMs were 25, and the dimensions of the input and output of the word LSTMs were 1505 and 100, respectively. The dimension of the input and output of the final feedforward network for intent, and slot were 200 and the number of their corresponding task. Its activation was rectified linear unit (ReLU). To initialize word embedding, we used word embedding trained from (Lample et al., 2016). In the following sections, we report intent classification results in accuracy percentage and slot results in F1-score. To compute slot F1-score, we used the standard CoNLL evaluation script6 5.3 Results We show our results in the DA setting where we had a sufficient labeled dataset in the 25 source domains shown in Table 3, but only 1000 labeled data in the target domain. The performance of the baselines and our domain experts DE variants are shown in Table 4. The top half of the table shows 4https://software.intel.com/en-us/articles/intelr-mkl-andc-template-libraries 5We concatenated last two outputs from the character LSTM and word embedding, resulting in 150 (25+25+100) 6http://www.cnts.ua.ac.be/conll2000/chunking/output.html the results of intent classification and the results of slot tagging is in the bottom half. The baseline which trained only on the target domain (TARGET) shows a reasonably good performance, yielding on average 87.7% on the intent classification and 83.9% F1-score on the slot tagging. Simply training a single model with aggregated utterance across all domains (UNION) brings the performance down to 77.4% and 75.3%. Using DA approach of Kim et al. (2016c) shows a significant increase in performance in all 7 domains, yielding on average 90.3% intent accuracy and 86.2%. The DE without domain attention (DEB) shows similar performance compared to DA. Using DE model with domain attention (DE1) shows another increase in performance, yielding on average 90.9% intent accuracy and 86.9%. The performance increases again when we use both feature representation and label embedding (DE2), yielding on average 91.4% and 88.2% and observe nearly 93.6% and 89.1% when using selective attention (DES2). Note that DES2 selects the appropriate number of experts per layer by evaluation on a development set. The results show that our expert variant approach (DES2) achieves a significant performance gain in all 7 test domains, yielding average error reductions of 47.97% for intent classification and 32.30% for slot tagging. The results suggest that our expert approach can quickly generalize to a new domain with limited supervision given K existing domains by having only a handful more data of 1k newly labeled data points. 
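The reported average error reductions follow directly from the average scores of TARGET and DES2 in Table 4, assuming (as the numbers indicate) that the reduction is computed relative to the TARGET baseline:

```python
def error_reduction(baseline_score, new_score):
    """Relative error reduction in percent, for scores given in percent."""
    return 100.0 * (new_score - baseline_score) / (100.0 - baseline_score)

print(round(error_reduction(87.7, 93.6), 2))   # intent accuracy: ~47.97
print(round(error_reduction(83.9, 89.1), 2))   # slot F1-score:   ~32.30
```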
The poor performance of using the union of both source and target domain data might be due to the relatively very small size of the target domain data, overwhelmed by the data in the source domain. For example, a word such as “home” can be labeled as place type under the TAXI domain, but in the source domains can be labeled as either home screen under the PHONE domain or contact name under the CALENDAR domain. 5.4 Training Time The Figure 3 shows the time required for training DES2 and DA of Kim et al. (2016c). The training time for DES2 stays almost constant as the number of source domains increases. However, the training time for DA grows exponentially in the number of source domains. Specifically, when trained 649 Task Domain TARGET UNION DA DEB DE1 DE2 DES2 Intent EVENTS 88.3 78.5 89.9 93.1 92.5 92.7 94.5 FITNESS 88.0 77.7 92.0 92.0 91.2 91.8 94.0 M-TICKET 88.2 79.2 91.9 94.4 91.5 92.7 93.4 ORDERPIZZA 85.8 76.6 87.8 89.3 89.4 90.8 92.8 REMINDER 87.2 76.3 91.2 90.0 90.5 90.2 93.1 TAXI 87.3 76.8 89.3 89.9 89.6 89.2 93.7 TV 88.9 76.4 90.3 81.5 91.5 92.0 94.0 AVG 87.7 77.4 90.3 90.5 90.9 91.4 93.6 Slot EVENTS 84.8 76.1 87.1 87.4 88.1 89.4 90.2 FITNESS 84.0 75.6 86.4 86.3 87.0 88.1 88.9 M-TICKET 84.2 75.6 86.4 86.1 86.8 88.4 89.7 ORDERPIZZA 82.3 73.6 84.2 84.4 85.0 86.3 87.1 REMINDER 83.5 75.0 85.9 86.3 87.0 88.3 89.2 TAXI 83.0 74.6 85.6 85.5 86.3 87.5 88.6 TV 85.4 76.7 87.7 87.6 88.3 89.3 90.1 AVG 83.9 75.3 86.2 86.2 86.9 88.2 89.1 Table 4: Intent classification accuracy (%) and slot tagging F1-score (%) of our baselines and variants of DE. The numbers in boldface indicate the best performing methods. Figure 3: Comparison of training time between our DES2 model and DA model of Kim et al. (2016c) as the number of domains increases. The horizontal axis means the number of domains, the vertical axis is training time per epoch in seconds. Here we use CALENDAR as the target domain, which has 1k training data. with 1 source or expert domain, both took around a minute per epoch on average. When training with full 25 source domains, DES2 took 3 minutes per epoch while DA took 30 minutes per epoch. Since we need to iterate over all 25+1 domains to re-train the global model, the net training time ratio could be over 250. EVENTS FITNESS M-TICKET ORDERPIZZA REMINDER TAXI TV Number of Experts Intent Accuracy (%) Figure 4: Learning curves in accuracy across all seven test domains as the number of expert domains increases. 5.5 Learning Curve We also measured the performance of our methods as a function of the number of domain experts. For each test domain, we consider all possible sizes of experts ranging from 1 to 25 and we then take the average of the resulting performances obtained from the expert sets of all different sizes. Figure 4 shows the resulting learning curves for each test domain. The overall trend is clear: as the more expert domains are added, the more the test performance improves. With ten or more expert domains added, our method starts to get saturated achiev650 REMINDER TAXI M-TICKET 0.92 0.84 0.93 Figure 5: Heatmap visualizing attention weights. ing more than 90% in accuracy across all seven domains. 5.6 Attention weights From the heatmap shown in Figure 5, we can see that the attention strength generally agrees with common sense. For example, the M-TICKET and TAXI domain selected MOVIE and PLACES as their top experts, respectively. 
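The paper does not spell out how the per-word attention weights are aggregated into the heatmap, but a natural way to read off a domain's top expert is to average the weights a_{i,k} over all word positions in that domain's test utterances; the following is only our own illustration of that aggregation:

```python
import numpy as np

def top_expert(attention_weights, expert_names):
    """Most-attended expert for one test domain.

    attention_weights : array of shape (num_words, K), the per-word weights
                        a_{i,k} collected over the domain's test utterances.
    expert_names      : list of K source-domain names.
    Returns e.g. ('MOVIE', 0.93) for the M-TICKET test domain.
    """
    mean_w = attention_weights.mean(axis=0)   # average over word positions
    k = int(mean_w.argmax())
    return expert_names[k], float(mean_w[k])
```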
5.7 Oracle Expert Domain TARGET DE2 Top 1 ALARM 70.1 98.2 ALARM (.99) HOTEL 65.2 96.9 HOTEL (.99) Table 5: Intent classification accuracy with an oracle expert in the expert pool. The results in Table 5 show the intent classification accuracy of DE2 when we already have the same domain expert in the expert pool. To simulate such a situation, we randomly sampled 1,000, 100, and 100 utterances from each domain as training, development and test data, respectively. In both ALARM and HOTEL domains, the trained models only on the 1,000 training utterances (TARGET) achieved only 70.1%and 65.2% in accuracy, respectively. Whereas, with our method (DE2) applied, we reached almost the full training performance by selectively paying attention to the oracle expert, yielding 98.2% and 96.9%, respectively. This result again confirms that the behavior of the trained attention network indeed matches the semantic closeness between different domains. 5.8 Selective attention The results in Table 6 examines how the intent prediction accuracy of DES2 varies with respect to the Domain Top 1 Top 3 Top 5 Top 25 EVENTS 98.1 98.8 99.2 96.4 TV 81.4 82.0 81.7 80.9 AVG 89.8 90.4 90.5 88.7 Table 6: Accuracies of DES2 using different number of experts. number of experts in the pool. The rationale behind DES2 is to alleviate the downside of soft attention, namely distributing probability mass over all items even if some are bad items. To deal with such issues, we apply a hard cut-off at top k domains. From the result, a threshold at top 3 or 5 yielded better results than that of either 1 or 25 experts. This matches our common sense that their are only a few of domains that are close enough to be of help to a test domain. Thus it is advisable to find the optimal k value through several rounds of experiments on a development dataset. 6 Conclusion In this paper, we proposed a solution for scaling domains and experiences potentially to a large number of use cases by reusing existing data labeled for different domains and applications. Our solution is based on attending an ensemble of domain experts. When given a new domain, our model uses a weighted combination of domain experts’ feedback along with its own opinion to make prediction on the new domain. In both intent classification and slot tagging tasks, the model significantly outperformed baselines that do not use domain adaptation and also performed better than the full re-training approach. This approach enables creation of new virtual domains through a weighted combination of domain experts’ feedback reducing the need to collect and annotate the similar intent and slot types multiple times for different domains. Future work can include an extension of domain experts to take into account dialog history aiming for a holistic framework that can handle contextual interpretation as well. 651 References Tasos Anastasakos, Young-Bum Kim, and Anoop Deoras. 2014. Task specific continuous word representations for mono and multi-lingual spoken language understanding. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, pages 3246–3250. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440–447. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing. 
Association for Computational Linguistics, pages 120–128. Asli Celikyilmaz, Ruhi Sarikaya, Minwoo Jeong, and Anoop Deoras. 2016. An empirical investigation of word class-based features for natural language understanding. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) 24(6):994–1005. Asli Celikyilmaz, Silicon Valley, and Dilek HakkaniTur. 2010. Convolutional neural network based semantic tagging with entity embeddings. genre . Yun-Nung Chen, Dilek Hakkani-T¨ur, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, pages 6045–6049. Hal Daum´e III. 2009. Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815 . Hal Daume III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research 26:101–126. Ali El-Kahky, Derek Liu, Ruhi Sarikaya, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. IEEE, Proceedings of the ICASSP. Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lef`evre. 2015. Zero-shot semantic parser for spoken language understanding. In Sixteenth Annual Conference of the International Speech Communication Association. Alex Graves. 2012. Neural networks. In Supervised Sequence Labelling with Recurrent Neural Networks, Springer, pages 15–35. Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE. IEEE, pages 554–559. Dilek Hakkani-T¨ur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. arXiv preprint arXiv:1604.00117 . Minwoo Jeong and Gary Geunbae Lee. 2008. Triangular-chain conditional random fields. IEEE Transactions on Audio, Speech, and Language Processing 16(7):1287–1302. Young-Bum Kim, Minwoo Jeong, Karl Stratos, and Ruhi Sarikaya. 2015a. Weakly supervised slot tagging with partially labeled sequences from web search click logs. In Proceedings of the NAACL. Association for Computational Linguistics. Young-Bum Kim, Alexandre Rochette, and Ruhi Sarikaya. 2016a. Natural language model reusability for scaling to different domains. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics . Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017. Adversarial adaptation of synthetic or stale data. In Annual Meeting of the Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, Xiaohu Liu, and Ruhi Sarikaya. 2015b. Compact lexicon selection with spectral methods. In Proc. of Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2015c. Pre-training of hidden-unit crfs. In Proc. 
of Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pages 192–198. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016b. Domainless adaptation by constrained decoding on a schema lattice. Proceedings of the 26th International Conference on Computational Linguistics (COLING) . Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016c. Frustratingly easy neural domain adaptation. Proceedings of the 26th International Conference on Computational Linguistics (COLING) . 652 Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016d. Scalable semi-supervised query classification using matrix sketching. In The 54th Annual Meeting of the Association for Computational Linguistics. page 8. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015d. New transfer learning techniques for disparate label sets. ACL. Association for Computational Linguistics . Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR). . Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360 . Xiao Li, Ye-Yi Wang, and Alex Acero. 2009. Extracting structured information from user queries with semi-supervised conditional random fields. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. Bing Liu and Ian Lane. 2016a. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech 2016. pages 685–689. Bing Liu and Ian Lane. 2016b. Joint online spoken language understanding and language modeling with recurrent neural networks. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Los Angeles. Xiaohu Liu and Ruhi Sarikaya. 2014. A discriminative model based entity dictionary weighting approach for spoken language understanding. In Spoken Language Technology Workshop (SLT). IEEE, pages 195–199. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. 2011. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks 22(2):199–210. Ruhi Sarikaya. 2015. The technology powering personal digital assistants. Keynote at Interspeech, Dresden, Germany. Ruhi Sarikaya, Asli Celikyilmaz, Anoop Deoras, and Minwoo Jeong. 2014. Shrinkage based features for slot tagging with conditional random fields. In INTERSPEECH. pages 268–272. Ruhi Sarikaya, Paul Crook, Alex Marin, Minwoo Jeong, Jean-Philippe Robichaud, Asli Celikyilmaz, Young-Bum Kim, Alexandre Rochette, Omar Zia Khan, Xiuahu Liu, et al. 2016. An overview of endto-end language understanding and dialog management for personal digital assistants. In IEEE Workshop on Spoken Language Technology. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958. Gokhan Tur. 
2006. Multitask learning for spoken language understanding. In In Proceedings of the ICASSP. Toulouse, France. Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, pages 78–83. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. IJCAI. 653
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 654–664 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1061 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 654–664 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1061 Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders Tiancheng Zhao, Ran Zhao and Maxine Eskenazi Language Technologies Institute Carnegie Mellon University Pittsburgh, Pennsylvania, USA {tianchez,ranzhao1,max+}@cs.cmu.edu Abstract While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making. 1 Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process. Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007). Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g. different strategies to recover from non-understanding (Yu et al., 2016). However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions. Thus, there has been a growing interest in applying encoder-decoder models (Sutskever et al., 2014) for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a). The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence. The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting. However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don’t know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b). There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response. 
Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a); (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016), encouraging responses that have long-term payoff (Li et al., 2016b), etc. Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level. Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them. Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the 654 discourse-level), each corresponding to a certain configuration of the latent variables that are not presented in the input. To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable (Figure 1). This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network. Figure 1: Given A’s question, there exists many valid responses from B for different assumptions of the latent variables, e.g., B’s hobby. Specifically, our contributions are three-fold: 1. We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) (Yan et al., 2015; Sohn et al., 2015), which introduces a latent variable that can capture discourse-level variations as described above 2. We propose Knowledge-Guided CVAE (kgCVAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability. 3. We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015). We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques. 2 Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE. 2.1 Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community. Ideal output responses should be both coherent and diverse. However, most models end up with generic and dull responses. To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more specific responses. Li et al., (2016a) captured speakers’ characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model. Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses. On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models. Li et al,. 
(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses. This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input. Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing. They introduced a searchbased loss that directly optimizes the networks for beam search decoding. The resulting model achieves better performance on word ordering, parsing and machine translation. Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation. Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering. 2.2 Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of the most popular frameworks for image generation. The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder. Then VAE applies a decoder network to reconstruct the original input using samples from z. To generate images, VAE first obtains a sample of z from the prior distribution, e.g. N(0, I), and then produces an image via the decoder network. A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g. generating different human faces given skin color (Yan et al., 2015; Sohn et al., 2015). Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE 655 to generate diverse responses instead of images. Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial. Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable. They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder. They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable. We refer to this issue as the vanishing latent variable problem. Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses. To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem. 3 Proposed Models Figure 2: Graphical models of CVAE (a) and kgCVAE (b) 3.1 Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k −1), the response utterance x (the kth utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses. Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g. the topic). 
We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c). We refer to pθ(z|c) as the prior network and pθ(x, |z, c) as the response decoder. Then the generative process of x is (Figure 2 (a)): 1. Sample a latent variable z from the prior network pθ(z|c). 2. Generate x through the response decoder pθ(x|z, c). CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z. As proposed in (Sohn et al., 2015; Yan et al., 2015), CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood. We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network qφ(z|x, c) to approximate the true posterior distribution p(z|x, c). Sohn and et al,. (2015) have shown that the variational lower bound can be written as: L(θ, φ; x, c) = −KL(qφ(z|x, c)∥pθ(z|c)) + Eqφ(z|c,x)[log pθ(x|z, c)] (1) ≤log p(x|c) Figure 3 demonstrates an overview of our model. The utterance encoder is a bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN ui = [⃗hi, ⃗ hi]. x is simply uk. The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u1:k−1 and the corresponding conversation floor as inputs. The last hidden state hc of the context encoder is concatenated with meta features and c = [hc, m]. Since we assume z follows isotropic Gaussian distribution, the recognition network qφ(z|x, c) ∼N(µ, σ2I) and the prior network pθ(z|c) ∼N(µ′, σ′2I), and then we have:  µ log(σ2)  = Wr x c  + br (2)  µ′ log(σ′2)  = MLPp(c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either 656 Figure 3: The neural network architectures for the baseline and the proposed CVAE/kgCVAE models. L denotes the concatenation of the input vectors. The dashed blue connections only appear in kgCVAE. from N(z; µ, σ2I) predicted by the recognition network (training) or N(z; µ′, σ′2I) predicted by the prior network (testing). Finally, the response decoder is a 1-layer GRU network with initial state s0 = Wi[z, c]+bi. The response decoder then predicts the words in x sequentially. 3.2 Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data. On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation. For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system. Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training. In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y. Then we assume that the generation of x depends on c, z and y. 
y relies on z and c as shown in Figure 2. Specifically, during training the initial state of the response decoder is s0 = Wi[z, c, y] + bi and the input at every step is [et, y] where et is the word embedding of tth word in x. In addition, there is an MLP to predict y′ = MLPy(z, c) based on z and c. In the testing stage, the predicted y′ is used by the response decoder instead of the oracle decoders. We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture. KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(qφ(z|x, c, y)∥Pθ(z|c)) + Eqφ(z|c,x,y)[log p(x|z, c, y)] + Eqφ(z|c,x,y)[log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. Another advantage of kgCVAE is that it can output a highlevel label (e.g. dialog act) along with the wordlevel responses, which allows easier interpretation of the model’s outputs. 3.3 Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015). Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0. We found that CVAE suffers from the same issue when the decoder is an RNN. Also we did not consider word drop decoding because Bowman et al,. (2015) have shown that it may hurt the performance when the drop rate is too high. As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss. The idea is to introduce 657 an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3(b). We decompose x into two variables: xo with word order and xbow without order, and assume that xo and xbow are conditionally independent given z and c: p(x, z|c) = p(xo|z, c)p(xbow|z, c)p(z|c). Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response. Let f = MLPb(z, x) ∈RV where V is vocabulary size, and we have: log p(xbow|z, c) = log |x| Y t=1 efxt PV j efj (5) where |x| is the length of x and xt is the word index of tth word in x. The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L′(θ, φ; x, c) = L(θ, φ; x, c) + Eqφ(z|c,x,y)[log p(xbow|z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique. 4 Experiment Setup 4.1 Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models. SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment. In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion. There are 70 available topics. We randomly split the data into 2316/60/62 dialogs for train/validate/test. The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary. 
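A minimal sketch of this preprocessing, assuming the false-start repetitions are handled by dropping immediately repeated tokens (the paper does not give the exact procedure, and the non-verbal symbol filter is omitted here):

```python
from collections import Counter
from nltk.tokenize import word_tokenize

def tokenize(utterance):
    tokens = word_tokenize(utterance.lower())
    # drop a token when it immediately repeats the previous one
    # (our simplification of removing repeated words from false starts)
    return [t for i, t in enumerate(tokens) if i == 0 or t != tokens[i - 1]]

def build_vocab(utterances, size=10000):
    counts = Counter(tok for u in utterances for tok in tokenize(u))
    return {w for w, _ in counts.most_common(size)}   # top-10K word types
```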
The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test. Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000). We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015). The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances. We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations. There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data. Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer. 4.2 Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 200 and is shared across everywhere. We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014). The utterance encoder has a hidden size of 300 for each direction. The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400. The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity. The latent variable z has a size of 200. The context window k is 10. All the initial weights are sampled from a uniform distribution [-0.08, 0.08]. The mini-batch size is 30. The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5. We selected the best models based on the variational lower bound on the validate data. Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance. Section 5.4 gives a detailed argument for the importance of the BOW loss. 5 Results 5.1 Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE. The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a). The baseline model’s encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3. The encoded context c is directly fed into the decoder networks as the initial state. The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss. Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam658 pling from the softmax. For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 5.2 Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge (Liu et al., 2016). Following our one-tomany hypothesis, we propose the following metrics. We assume that for a given dialog context c, there exist Mc reference responses rj, j ∈[1, Mc]. Meanwhile a model can generate N hypothesis responses hi, i ∈[1, N]. The generalized responselevel precision/recall for a given dialog context is: precision(c) = PN i=1 maxj∈[1,Mc]d(rj, hi) N recall(c) = PMc j=1 maxi∈[1,N]d(rj, hi)) Mc where d(rj, hi) is a distance function which lies between 0 to 1 and measures the similarities between rj and hi. 
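These definitions translate directly into code; the sketch below is our own, and any of the distance functions d described next can be plugged in:

```python
def response_precision_recall(refs, hyps, d):
    """Generalized response-level precision/recall for one dialog context.

    refs : the Mc reference responses r_1 .. r_Mc
    hyps : the N generated hypotheses h_1 .. h_N
    d    : similarity function d(r, h) returning a value in [0, 1]
    """
    precision = sum(max(d(r, h) for r in refs) for h in hyps) / len(hyps)
    recall = sum(max(d(r, h) for h in hyps) for r in refs) / len(refs)
    return precision, recall
```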
The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: 1. Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015). We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale. 2. Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016). The d(rj, hi) is the cosine distance of the two embedding vectors. We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow. 3. Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model. We set d(rj, hi) = 1 if rj and hi have the same dialog acts, otherwise d(rj, hi) = 0. One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts. This impacts reliability of our measures. Inspired by (Sordoni et al., 2015), we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics. Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier. The result is 6.69 extra references in average per context. The average number of distinct reference dialog acts is 4.2. Table 1 shows the results. Metrics Baseline CVAE kgCVAE perplexity (KL) 35.4 (n/a) 20.2 (11.36) 16.02 (13.08) BLEU-1 prec 0.405 0.372 0.412 BLEU-1 recall 0.336 0.381 0.411 BLEU-2 prec 0.300 0.295 0.350 BLEU-2 recall 0.281 0.322 0.356 BLEU-3 prec 0.272 0.265 0.310 BLEU-3 recall 0.254 0.292 0.318 BLEU-4 prec 0.226 0.223 0.262 BLEU-4 recall 0.215 0.248 0.272 A-bow prec 0.387 0.389 0.373 A-bow recall 0.337 0.361 0.336 E-bow prec 0.701 0.705 0.711 E-bow recall 0.684 0.709 0.712 DA prec 0.736 0.704 0.721 DA recall 0.514 0.604 0.598 Table 1: Performance of each model on automatic measures. The highest score in each row is in bold. Note that our BLEU scores are normalized to [0, 1]. The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance. This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity. As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses. However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW). One reason 659 for kgCVAE’s good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words. We further analyze the precision/recall of BLEU4 by looking at the average score versus the number of distinct reference dialog acts. 
A low number of distinct dialog acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy). Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts. Also it shows that CVAE suffers from lower precision, especially in low entropy contexts. Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy. Figure 4: BLEU-4 precision/recall vs. the number of distinct reference dialog acts. 5.3 Qualitative Analysis Table 2 shows the outputs generated from the baseline and kgCVAE. In example 1, caller A begins with an open-ended question. The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts. Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y. On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e. ”I’m”. Example 2 is a situation where caller A is telling B stories. The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener. The baseline successfully predicts ”uh-huh”. The kgCVAE model is also able to generate various ways of back-channeling. This implies that the latent z is able to capture context-sensitive variations, i.e. in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity. Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context. In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups. Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008). We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption. Figure 5: t-SNE visualization of the posterior z for test responses with top 8 frequent dialog acts. The size of circle represents the response length. 5.4 Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder. To compare with past work (Bowman et al., 2015), we conducted the same language modelling (LM) task on Penn Treebank using VAE. The network architecture is same except we use GRU instead of LSTM. We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA. Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost. For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches. Table 3 shows the reconstruction perplexity and the KL cost on the test dataset. The standard VAE fails to learn a meaningful latent variable by hav660 Example 1-Topic: Recycling Context: A: are they doing a lot of recycling out in Georgia? 
Target-B (statement): well at my workplace we have palaces for aluminium cans and we have a separate trash can for recyclable paper Baseline+Sampling kgCVAE+Greedy 1. well I’m a graduate student and have two kids 1. (non-understand) pardon 2. well I was in last year and so we’ve had lots of recycling 2. (statement) oh you’re not going to have a curbside pick up here 3. I’m not sure 3. (statement) okay I am sure about a recycling center 4. well I don’t know I just moved here in new york 4. (yes-answer) yeah so Example 2-Topic: Child Care Context: A: you know a private home to take their children to when they’re young until they hit the preschool age and they Target-B (backchannel): uh-huh Baseline+Sampling kgCVAE+Greedy 1. um - hum 1. (backchannel) uh-huh 2. yeah 2. (turn-exit) um-hum 3. um - hum 3. (backchannel) yeah 4. uh-huh 4. (statement) oh yeah I think that’s part of the problem Table 2: Generated responses from the baselines and kgCVAE in two examples. KgCVAE also provides the predicted dialog act for each response. The context only shows the last utterance due to space limit (the actual context window size is 10). ing a KL cost close to 0 and a reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014). KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1. At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost. Model Perplexity KL cost Standard 122.0 0.05 KLA 111.5 2.02 BOW 97.72 7.41 BOW+KLA 73.04 15.94 Table 3: The reconstruction perplexity and KL terms on Penn Treebank test set. Figure 6 visualizes the evolution of the KL cost. We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers. On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small. However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation. The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder. Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments. Figure 6: The value of the KL divergence during training with different setups on Penn Treebank. 6 Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level. While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog. In turn, the output of this novel neural dialog model will be easier to explain and control by humans. In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc. 
Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents. All of the above suggest a promising research direction. 661 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv preprint arXiv:1608.04207 . Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python. ” O’Reilly Media, Inc.”. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research 3(Jan):993–1022. Dan Bohus and Alexander I Rudnicky. 2003. Ravenclaw: Dialog management using hierarchical task decomposition and an expectation agenda . Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 . Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel bleu. ACL 2014 page 362. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevˆeque, and R´eal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NIPS, Modern Machine Learning and Natural Language Processing Workshop. John J Godfrey and Edward Holliman. 1997. Switchboard-1 release 2. Linguistic Data Consortium, Philadelphia . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055 . Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155 . Jiwei Li, Will Monroe, Alan Ritter, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541 . Diane J Litman and James F Allen. 1987. A plan recognition model for subdialogues in conversations. Cognitive science 11(2):163–200. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023 . Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research 9(Nov):2579–2605. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532– 43. Massimo Poesio and David Traum. 1998. Towards an axiomatization of dialogue acts. In Proceedings of the Twente Workshop on the Formal Semantics and Pragmatics of Dialogues (13th Twente Workshop on Language Technology. Citeseer. 
Antoine Raux, Brian Langner, Dan Bohus, Alan W Black, and Maxine Eskenazi. 2005. Lets go public! taking a spoken dialog system to the real world. In in Proc. of Interspeech 2005. Citeseer. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 . Eug´enio Ribeiro, Ricardo Ribeiro, and David Martins de Matos. 2015. The influence of context on dialogue act recognition. arXiv preprint arXiv:1506.00839 . Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information processing & management 24(5):513– 523. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016a. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16). Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016b. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069 . 662 Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems. pages 3483–3491. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714 . Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics 26(3):339–373. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Johan AK Suykens and Joos Vandewalle. 1999. Least squares support vector machine classifiers. Neural processing letters 9(3):293–300. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 . Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language 21(2):393–422. Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960 . Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. arXiv preprint arXiv:1606.08340 . Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2015. Attribute2image: Conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570 . Zhou Yu, Ziyu Xu, Alan W Black, and Alex I Rudnicky. 2016. Strategy and policy learning for nontask-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. volume 2, page 7. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 . Tiancheng Zhao and Maxine Eskenazi. 2016. 
Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560.

A Supplemental Material

Variational Lower Bound for kgCVAE

We assume that even with the presence of linguistic feature y regarding x, the prediction of x_bow still only depends on z and c. Therefore, we have:

\mathcal{L}(\theta, \phi; x, c, y) = -\mathrm{KL}\big(q_\phi(z \mid x, c, y) \,\|\, P_\theta(z \mid c)\big) + \mathbb{E}_{q_\phi(z \mid c, x, y)}[\log p(x \mid z, c, y)] + \mathbb{E}_{q_\phi(z \mid c, x, y)}[\log p(y \mid z, c)] + \mathbb{E}_{q_\phi(z \mid c, x, y)}[\log p(x_{\mathrm{bow}} \mid z, c)] \quad (7)

Collection of Multiple Reference Responses

We collected multiple reference responses for each dialog context in the test set by combining information retrieval techniques with a traditional machine learning method. First, we encode the dialog history into a vector representation h using a Term Frequency-Inverse Document Frequency (TF-IDF) (Salton and Buckley, 1988) weighted bag-of-words. We denote the topic of the conversation as t and the conversation floor as f, i.e. f = 1 if the speakers of the last utterance in the dialog history and of the response utterance are the same, and f = 0 otherwise. We then computed the similarity d(c_i, c_j) between two dialog contexts as:

d(c_i, c_j) = \mathbb{1}(t_i = t_j)\, \mathbb{1}(f_i = f_j)\, \frac{h_i \cdot h_j}{\|h_i\| \, \|h_j\|} \quad (8)

Unlike past work (Sordoni et al., 2015), this similarity function only depends on the distance between contexts and imposes no constraints on the response, and is therefore suitable for finding diverse responses for the same dialog context. Secondly, for each dialog context in the test set, we retrieved the 10 nearest neighbors from the training set and treated their responses as candidate reference responses. Thirdly, we further sampled 240 context-response pairs from the 5481 pairs in the full test set; the selected candidate responses were post-processed by two human computational linguistics experts, who were asked to give a binary label for each candidate response indicating whether the response is appropriate for its dialog context. The filtered lists then served as the ground truth to train our reference response classifier. Next, we extracted bigrams, part-of-speech bigrams and word/part-of-speech pairs from both the dialog contexts and the candidate reference responses, with the rare-feature threshold for feature extraction set to 20. Then L2-regularized logistic regression with 10-fold cross-validation was applied as the machine learning algorithm. Cross-validation accuracy on the human-labelled data was 71%. Finally, we automatically annotated the rest of the test set with this trained classifier, and the resulting data were used for model evaluation.
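For concreteness, the candidate-retrieval step can be rendered as a minimal sketch. This is a reconstruction under assumptions, not the authors' code: contexts are taken to be dicts with "text", "topic" and "floor" fields, and the gating follows the reading of Eq. (8) as one topic indicator and one floor indicator.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_candidates(test_contexts, train_contexts, train_responses, k=10):
    # TF-IDF weighted bag-of-words encoding of the dialog histories
    vec = TfidfVectorizer()
    H_train = vec.fit_transform([c["text"] for c in train_contexts])
    H_test = vec.transform([c["text"] for c in test_contexts])
    sims = cosine_similarity(H_test, H_train)   # h_i . h_j / (||h_i|| ||h_j||)
    for i, c_i in enumerate(test_contexts):
        # gate the cosine similarity by the topic and floor indicators of Eq. (8)
        gate = np.array([
            float(c_i["topic"] == c_j["topic"] and c_i["floor"] == c_j["floor"])
            for c_j in train_contexts
        ])
        scores = gate * sims[i]
        nearest = np.argsort(-scores)[:k]        # 10 nearest neighbours
        yield [train_responses[j] for j in nearest]

The retrieved responses would then be filtered by the reference response classifier described above.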
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 665–677 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1062

Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning

Jason D. Williams, Microsoft Research, [email protected]
Kavosh Asadi, Brown University, [email protected]
Geoffrey Zweig∗, Microsoft Research, [email protected] (∗Currently at JPMorgan Chase)

Abstract

End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors. We introduce Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates. Compared to existing end-to-end approaches, HCNs considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state. In addition, HCNs can be optimized with supervised learning, reinforcement learning, or a mixture of both. HCNs attain state-of-the-art performance on the bAbI dialog dataset (Bordes and Weston, 2016), and outperform two commercially deployed customer-facing dialog systems.

1 Introduction

Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phone call. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels.

Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can be expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information.

This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs), which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability.
Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied. This paper is organized as follows. Section 2 describes the model, and Section 3 compares the model to related work. Section 4 applies HCNs to the bAbI dialog dataset (Bordes and Weston, 2016). Section 5 then applies the method to real customer support domains at our company. Section 6 illustrates how HCNs can be optimized with reinforcement learning, and Section 7 concludes. 665 What’s the weather this week in Seattle? Choose action template Entity output Action type? Anything else? text Dense + softmax RNN Entity tracking Bag of words vector Forecast() 0.93 Normalization. X Anything else? 0.07 <city>, right? 0.00 API call API WeatherBot Utterance embedding 2 3 4 7 9 11 14 15 18 6 12 13 17 Action mask Context features Fully-formed action API result 1 5 Entity extraction t+1 t+1 8 10 t+1 16 Figure 1: Operational loop. Trapezoids refer to programmatic code provided by the software developer, and shaded boxes are trainable components. Vertical bars under “6” represent concatenated vectors which form the input to the RNN. 2 Model description At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1. The cycle begins when the user provides an utterance, as text (step 1). The utterance is featurized in several ways. First, a bag of words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4) – for example, identifying “Jennifer Jones” as a <name> entity. The text and entity mentions are then passed to “Entity tracking” code provided by the developer (step 5), which grounds and maintains entities – for example, mapping the text “Jennifer Jones” to a specific row in a database. This code can optionally return an “action mask”, indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return “context features” which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent. The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long shortterm memory (LSTM) (Hochreiter and Schmidhuber, 1997) or gated recurrent unit (GRU) (Chung et al., 2014). The RNN computes a hidden state (vector), which is retained for the next timestep (step 8), and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9).1 Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10) – this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). 
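Steps 2 through 12 of the operational loop can be summarized in a short, framework-agnostic sketch. This is not the HCN implementation (the actual Keras specification is in Appendix A.1); the featurizers, entity-tracking state, RNN cell and output layer are stood in by function arguments, and all names are assumptions.

import numpy as np

def hcn_turn(utterance, entity_state, prev_action_vec, rnn_state,
             featurize_bow, embed_utterance, context_features,
             action_mask, rnn_step, output_layer):
    x = np.concatenate([
        featurize_bow(utterance),        # step 2: bag-of-words vector
        embed_utterance(utterance),      # step 3: utterance embedding
        context_features(entity_state),  # step 5: developer context features
        prev_action_vec,                 # step 18 from the previous turn
    ])                                   # step 6: concatenated feature vector
    rnn_state = rnn_step(x, rnn_state)   # steps 7-8: recurrent update
    p = output_layer(rnn_state)          # step 9: softmax over action templates
    mask = action_mask(entity_state)     # bit vector returned by developer code (step 5)
    p = p * mask
    p = p / p.sum()                      # step 10: renormalize to a distribution
    action = int(np.argmax(p))           # step 12 (greedy; during RL an action is sampled)
    return action, rnn_state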
When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected. The selected action is next passed to “Entity output” developer code that can substitute in entities (step 13) and produce a fully-formed action – for example, mapping the template “<city>, 1Implementation details for the RNN such as size, loss, etc. are given with each experiment in Sections 4-6. 666 right?” to “Seattle, right?”. In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15) – for example, to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18). 3 Related work Broadly there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation (Levin et al., 2000; Singh et al., 2002; Williams and Young, 2007; Williams, 2008; Hori et al., 2009; Lee et al., 2009; Griol et al., 2008; Young et al., 2013; Li et al., 2014). Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks (Wen et al., 2016), trained with supervised learning followed by reinforcement learning (Su et al., 2016). In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. For learning efficiency, HCNs use an external lightweight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section 5 below, we demonstrate an HCNbased dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN. The second, more recent line of work applies recurrent neural networks (RNNs) to learn “endto-end” models, which map from an observable dialog history directly to a sequence of output words (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Yao et al., 2015; Serban et al., 2016; Li et al., 2016a,c; Luan et al., 2016; Xu et al., 2016; Li et al., 2016b; Mei et al., 2016; Lowe et al., 2017; Serban et al., 2017). These systems can be applied to task-oriented domains by adding special “API call” actions, enumerating database output as a sequence of tokens (Bordes and Weston, 2016), then learning an RNN using Memory Networks (Sukhbaatar et al., 2015), gated memory networks (Liu and Perez, 2016), query reduction networks (Seo et al., 2016), and copyaugmented networks (Eric and Manning, 2017). In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. 
HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as “<city>, right?”. This design reduce learning complexity, and also enable the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning. 4 Supervised learning evaluation I In this section we compare HCNs to existing approaches on the public “bAbI dialog” dataset (Bordes and Weston, 2016). This dataset includes two end-to-end dialog learning tasks, in the restaurant domain, called task5 and task6.2 Task5 consists of synthetic, simulated dialog data, with highly regular user behavior and constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the “OOV” variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system (Henderson et al., 2014a). Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix Sections A.2 and A.3. To apply HCNs, we wrote simple domain2Tasks 1-4 are sub-tasks of Task5. 667 specific software, as follows. First, for entity extraction (step 4 in Figure 1), we used a simple string match, with a pre-defined list of entity names – i.e., the list of restaurants available in the database. Second, in the context update (step 5), we wrote simple logic for tracking entities: when an entity is recognized in the user input, it is retained by the software, over-writing any previously stored value. For example, if the price “cheap” is recognized in the first turn, it is retained as price=cheap. If “expensive” is then recognized in the third turn, it over-writes “cheap” so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form “prezzo is a nice restaurant in the west of town in the moderate price range” all map to the template “<name> is a nice restaurant in the <location> of town in the <price> price range”. This results in 16 templates for Task5 and 58 for Task6.3 Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc. For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section A.3 in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN. 
So, we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database – for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section A.4. Altogether this code consisted of about 250 lines of Python. We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer (Hochreiter and Schmidhuber, 1997), with the AdaDelta optimizer (Zeiler, 2012). We used the development set to tune the number of hid3A handful of actions in Task6 seemed spurious; for these, we replaced them with a special “UNK” action in the training set, and masked this action at test time. den units (128), and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300dimensional word embedding model trained using word2vec on web data (Mikolov et al., 2013).4 The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts (i.e., non-truncated back propagation through time). The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix Section A.1. We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1, steps 3 and 6 respectively). Following past work, we report average turn accuracy – i.e., for each turn in each dialog, present the (true) history of user and system actions to the network and obtain the network’s prediction as a string of characters. The turn is correct if the string matches the reference exactly, and incorrect if not. We also report dialog accuracy, which indicates if all turns in a dialog are correct. We compare to four past end-to-end approaches (Bordes and Weston, 2016; Liu and Perez, 2016; Eric and Manning, 2017; Seo et al., 2016). We emphasize that past approaches have applied purely sequence-to-sequence models, or (as a baseline) purely programmed rules (Bordes and Weston, 2016). By contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models. Results are shown in Table 1. Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models. We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2. In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note the horizontal axis scale is logarithmic). For Task6, the bene4Google News 100B model from https://github. com/3Top/word2vec-api 668 Task5-OOV Task6 Model Turn Acc. Dialog Acc. Turn Acc. Dialog Acc. Rules 100% 100% 33.3% 0.0% Bordes and Weston (2016) 77.7% 0.0% 41.1% 0.0% Liu and Perez (2016) 79.4% 0.0% 48.7% 1.4% Eric and Manning (2017) — — 48.0% 1.5% Seo et al. 
(2016) 96.0% — 51.1% — HCN 100% 100% 54.0% 1.2% HCN+embed 100% 100% 55.6% 1.3% HCN+mask 100% 100% 53.1% 1.9% HCN+embed+mask 100% 100% 52.7% 1.5% Table 1: Results on bAbI dialog Task5-OOV and Task6 (Bordes and Weston, 2016). Results for “Rules” taken from Bordes and Weston (2016). Note that, unlike cited past work, HCNs make use of domainspecific procedural knowledge. 20% 30% 40% 50% 60% 70% 80% 90% 100% 1 2 5 10 20 50 100 200 500 1000 Turn accuracy Supervised learning training dialogs HCN+mask+embed HCN+mask HCN+embed HCN (a) bAbI dialog Task5-OOV. 0% 10% 20% 30% 40% 50% 60% 1 2 5 10 20 50 100 200 500 1000 1618 Turn accuracy Supervised learning training dialogs HCN+mask+embed HCN+mask HCN+embed HCN (b) bAbI dialog Task6. Figure 2: Training dialog count vs. turn accuracy for bAbI dialog Task5-OOV and Task6. “embed” indicates whether utterance embeddings were included; “mask” indicates whether the action masking code was active. fits of the utterance embeddings are less clear. An error analysis showed that there are several systematic differences between the training and testing sets. Indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set. Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed performance of existing sequence-to-sequence models. In addition, they match performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings where collecting realistic dialogs for a new domain can be expensive. 5 Supervised learning evaluation II We now turn to comparing with purely handcrafted approaches. To do this, we obtained logs from our company’s text-based customer support dialog system, which uses a sophisticated rulebased dialog manager. Data from this system is attractive for evaluation because it is used by real customers – not usability subjects – and because its rule-based dialog manager was developed by customer support professionals at our company, and not the authors. This data is not publicly available, but we are unaware of suitable humancomputer dialog data in the public domain which uses rules. Customers start using the dialog system by entering a brief description of their problem, such 669 as “I need to update my operating system”. They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected humancomputer transcripts for the high-traffic domains “reset password” and “cannot access account”. We labeled the dialog data as follows. First, we enumerated unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action, and determined whether it was “correct”. Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix Sections A.5 and A.6. If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action, and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. 
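A small sketch of this labeling procedure is given below; the data structures are hypothetical, and judge() stands in for the human annotator's decision about the most appropriate action given the dialog history.

def relabel_dialog(turns, judge):
    """turns: list of (user_utterance, system_action) pairs from a logged dialog;
    judge(history, action) returns the correct system action for that history."""
    labeled, history = [], []
    for user, sys_action in turns:
        history.append(user)
        correct = judge(history, sys_action)
        if correct == sys_action:
            labeled.append((user, sys_action))   # keep the turn as-is
            history.append(sys_action)
        else:
            # replace with the correct action and discard the rest of the dialog,
            # since the user's reply to the new action is unknown
            labeled.append((user, correct))
            break
    return labeled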
The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2. In this domain, no entities were relevant to the control flow, and there was no obvious mask logic since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN was the same as in Section 4, except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs. To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs – in our case, the rule-based system. If the system used for collection makes an error at turn t, the labeled dialog only includes the sub-dialog up to turn t, and the system being evaluated off-line is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN because it would be evaluated on fewer turns than the rule-based system. We therefore Forgot Account password Access Av. sys. turns/dialog 2.2 2.2 Max. sys. turns/dialog 5 9 Av. words/user turn 7.7 5.4 Unique sys. actions 7 16 Train dialogs 422 56 Test dialogs 148 60 Test acc. (rules) 64.9% 42.1% Table 2: Basic statistics of labeled customer support dialogs. Test accuracy refers to whole-dialog accuracy of the existing rule-based system. use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog. Specifically, we report ∆P = C(HCN-win)−C(rule-win) C(all) , where C(HCN-win) is the number of test dialogs where the rule-based approach output a wrong action before the HCN; C(rule-win) is the number of test dialogs where the HCN output a wrong action before the rulebased approach; and C(all) is the number of dialogs in the test set. When ∆P > 0, there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We run all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3. HCNs exceed performance of the existing rule-based system after about 30 dialogs. In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example “sunnyday” dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the “Forgot password” domain, and 325 for the “Can’t access account” domain. Training was repeated as described above. Results are also included in Figure 3, with the suffix “sampled”. In the “Can’t access account” domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the “forgot password” domain is present but smaller. In summary, HCNs can out-perform 670 -40% -30% -20% -10% 0% 10% 20% 0 20 40 60 80 100 ΔP Labeled supervised learning training dialogs HCN+embed+sampled HCN+sampled HCN+embed HCN (a) “Forgot password” domain. 
(b) "Can't access account" domain.

Figure 3: Training dialogs vs. ∆P, where ∆P is the fraction of test dialogs where HCNs produced longer initial correct sequences of system actions than the rules, minus the fraction where rules produced longer initial correct sequences than the HCNs. "embed" indicates whether utterance embeddings were included; "sampled" indicates whether dialogs sampled from the rule-based controller were included in the training set.

production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic "sunny-day" dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model.

6 Reinforcement learning illustration

In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted G. For optimization, we selected a policy gradient approach (Williams, 1992), which has been successfully applied to dialog systems (Jurčíček et al., 2011), robotics (Kohl and Stone, 2004), and the board game Go (Silver et al., 2016).

In policy gradient-based RL, a model π is parameterized by w and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory – in our case, dialog – the return G for that trajectory is computed, and the gradients of the probabilities of the actions taken with respect to the model weights are computed. The weights are then adjusted by taking a gradient step proportional to the return:

w \leftarrow w + \alpha \Big( \sum_t \nabla_w \log \pi(a_t \mid h_t; w) \Big) (G - b) \quad (1)

where α is a learning rate; a_t is the action taken at timestep t; h_t is the dialog history at time t; G is the return of the dialog; \nabla_x F denotes the Jacobian of F with respect to x; b is a baseline described below; and π(a|h; w) is the LSTM – i.e., a stochastic policy which outputs a distribution over actions a given a dialog history h, parameterized by weights w. The baseline b is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling (the choice of baseline does not affect the long-term convergence of the algorithm, i.e. the bias, but can dramatically affect the speed of convergence, i.e. the variance; Williams, 1992). Intuitively, "better" dialogs receive a positive gradient step, making the actions selected more likely; and "worse" dialogs receive a negative gradient step, making the actions selected less likely.

SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set.
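A minimal, framework-agnostic sketch of the per-dialog update in Eq. (1) is given below. The per-step gradients of log π(a_t | h_t; w) and the baseline b are assumed to be computed elsewhere, and all names are illustrative rather than taken from the HCN implementation.

def policy_gradient_update(weights, step_grads, reward, n_system_turns,
                           baseline, alpha=0.01, gamma=0.95):
    """weights: list of numpy arrays, updated in place;
    step_grads[t]: gradients of log pi(a_t | h_t; w), one array per weight."""
    G = reward * gamma ** (n_system_turns - 1)   # return: 0.95**(T-1) on success, 0 on failure
    advantage = G - baseline                     # the factor (G - b) in Eq. (1)
    for grads_t in step_grads:                   # sum over timesteps t
        for w, g in zip(weights, grads_t):
            w += alpha * advantage * g           # gradient step proportional to (G - b)
    return G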
If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization. We illustrate RL optimization on a simulated dialog task in the name dialing domain. In this system, a contact’s name may have synonyms (“Michael” may also be called “Mike”), and a contact may have more than one phone number, such as “work” or “mobile”, which may in turn have synonyms like “cell” for “mobile”. This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types – firstname, nickname, lastname, phonenumber, and phonetype – and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phonecall before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section A.7. To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand. We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of 0.95 was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and G = 0.95T−1 for successful dialogs, where T is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning. For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting bag-of-words and utterance embedding features. We first evaluate RL by randomly initializing an 0% 10% 20% 30% 40% 50% 60% 70% Dialog success rate Reinforcement learning training dialogs 10 interleaved 10 initial 5 initial 3 initial 1 initial 0 Figure 4: Dialog success rate vs. reinforcement learning training dialogs. Curve marked “0” begins with a randomly initialized LSTM. Curves marked “N initial” are pre-trained with N labeled dialogs. Curve marked “10, interleaved” adds one SL training dialog before RL dialog 0, 100, 200, ... 900. LSTM, and begin RL optimization. After 10 RL updates, we freeze the policy, and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs, and report average performance. In addition, we also report results by initializing the LSTM using supervised learning on the training set, consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above. Results are in Figure 4. Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments, not shown for space, found that ablating the action mask slowed training, agreeing with Williams (2008). 
Finally, we conduct a further experiment where we sample 10 training dialogs, then add one to the training set just before RL dialog 0, 100, 200, ... , 900. Results are shown in Figure 4. This shows that SL dialogs can be introduced as RL is in progress – i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set. 7 Conclusion This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog 672 systems. HCNs support a separation of concerns where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort. Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded performance of purely learned models. Results in two troubleshooting domains exceeded performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning. In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network (Dhingra et al., 2017), adding richer utterance embeddings (Socher et al., 2013), and supporting text generation (Sordoni et al., 2015). We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results (Henderson et al., 2014b). Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems – for example, in NLP medical settings or humanrobot NL interaction tasks, providing domain constraints are important for safety; and in resourcepoor settings, providing domain knowledge can amplify limited data. References Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. CoRR abs/1605.07683. http://arxiv.org/abs/1605.07683. Franois Chollet. 2015. Keras. https://github. com/fchollet/keras. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proc NIPS 2014 Deep Learning and Representation Learning Workshop. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proc Association for Computational Linguistics, Vancouver, Canada. Mihail Eric and Christopher D Manning. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on taskoriented dialogue. CoRR abs/1701.04024. https://arxiv.org/abs/1701.04024. David Griol, Llus F. Hurtado, Encarna Segarra, and Emilio Sanchis. 2008. A statistical approach to spoken dialog systems design and evaluation. Speech Communication 50(8–9). Matthew Henderson, Blaise Thomson, and Jason Williams. 2014a. The second dialog state tracking challenge. In Proc SIGdial Workshop on Discourse and Dialogue, Philadelphia, USA. Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. 
Word-based Dialog State Tracking with Recurrent Neural Networks. In Proc SIGdial Workshop on Discourse and Dialogue, Philadelphia, USA. Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Chiori Hori, Kiyonori Ohtake, Teruhisa Misu, Hideki Kashioka, and Satoshi Nakamura. 2009. Statistical dialog management applied to WFSTbased dialog systems. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on. pages 4793–4796. https://doi.org/10.1109/ICASSP.2009.4960703. Filip Jurˇc´ıˇcek, Blaise Thomson, and Steve Young. 2011. Natural actor and belief critic: Reinforcement algorithm for learning parameters of dialogue systems modelled as pomdps. ACM Transactions on Speech and Language Processing (TSLP) 7(3):6. Nate Kohl and Peter Stone. 2004. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International Conference on. IEEE, volume 3, pages 2619–2624. Cheongjae Lee, Sangkeun Jung, Seokhwan Kim, and Gary Geunbae Lee. 2009. Example-based dialog modeling for practical multi-domain dialog system. Speech Communication 51(5):466–484. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialogue strategies. IEEE Trans on Speech and Audio Processing 8(1):11–23. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proc HLT-NAACL, San Diego, California, USA. 673 Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proc Association for Computational Linguistics, Berlin, Germany. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016c. Deep reinforcement learning for dialogue generation. In Proc Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. Lihong Li, He He, and Jason D. Williams. 2014. Temporal supervised learning for inferring a dialog policy from example conversations. In Proc IEEE Workshop on Spoken Language Technologies (SLT), South Lake Tahoe, Nevada, USA. Fei Liu and Julien Perez. 2016. Gated end-toend memory networks. CoRR abs/1610.04211. http://arxiv.org/abs/1610.04211. Ryan Thomas Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017. Training end-to-end dialogue systems with the ubuntu dialogue corpus. Dialogue and Discourse 8(1). Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. LSTM based conversation models. CoRR abs/1603.09457. http://arxiv.org/abs/1603.09457. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Coherent dialogue with attentionbased language models. CoRR abs/1611.06997. http://arxiv.org/abs/1611.06997. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proc Advances in Neural Information Processing Systems, Lake Tahoe, USA. pages 3111– 3119. Min Joon Seo, Hannaneh Hajishirzi, and Ali Farhadi. 2016. Query-regression networks for machine comprehension. CoRR abs/1606.04582. http://arxiv.org/abs/1606.04582. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. 
In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI’16, pages 3776–3783. http://dl.acm.org/citation.cfm?id=3016387.3016435. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. Lifeng Shang, Zhengdong Lu, , and Hang Li. 2015. Neural responding machine for short-text conversation. In Proc Association for Computational Linguistics, Beijing, China. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489. Satinder Singh, Diane J Litman, Michael Kearns, and Marilyn A Walker. 2002. Optimizing dialogue management with reinforcement leaning: experiments with the NJFun system. Journal of Artificial Intelligence 16:105–133. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Chris Manning, Andrew Ng, and Chris Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proc HLT-NAACL, Denver, Colorado, USA. Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. In arXiv preprint: 1606.02689. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Proc Advances in Neural Information Processing Systems (NIPS), Montreal, Canada. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proc ICML Deep Learning Workshop. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve J. Young. 2016. A network-based end-to-end trainable taskoriented dialogue system. CoRR abs/1604.04562. http://arxiv.org/abs/1604.04562. Jason D. Williams. 2008. The best of both worlds: Unifying conventional dialog systems and POMDPs. In Proc Intl Conf on Spoken Language Processing (ICSLP), Brisbane, Australia. Jason D. Williams and Steve Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language 21(2):393–422. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256. 674 Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loosestructured knowledge into LSTM with recall gate for conversation modeling. CoRR abs/1605.05110. http://arxiv.org/abs/1605.05110. Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. In Proc NIPS workshop on Machine Learning for Spoken Language Understanding and Interaction. Steve Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. 
POMDP-based Statistical Spoken Dialogue Systems: a Review. Proceedings of the IEEE PP(99):1–20.

Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701.

A Supplemental Material

A.1 Model implementation details

The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 (Theano Development Team, 2016; Chollet, 2015). The Keras model specification is given below. The input variable obs includes all features from Figure 1 step 6 except for the previous action (step 18) and the action mask (step 6, top-most vector).

# Given:
#   obs_size, action_size, nb_hidden
# (n_hidden below denotes the LSTM hidden size, i.e. nb_hidden in Table 3)
g = Graph()
g.add_input(
    name='obs',
    input_shape=(None, obs_size)
)
g.add_input(
    name='prev_action',
    input_shape=(None, action_size)
)
g.add_input(
    name='avail_actions',
    input_shape=(None, action_size)
)
g.add_node(
    LSTM(
        n_hidden,
        return_sequences=True,
        activation='tanh',
    ),
    name='h1',
    inputs=[
        'obs',
        'prev_action',
        'avail_actions'
    ]
)
g.add_node(
    TimeDistributedDense(
        action_size,
        activation='softmax',
    ),
    name='h2',
    input='h1'
)
# the action mask is applied via merge_mode='mul'; 'normalize' (defined by the
# authors) renormalizes the masked scores to a distribution (Figure 1, step 10)
g.add_node(
    Activation(
        activation=normalize,
    ),
    name='action',
    inputs=['h2', 'avail_actions'],
    merge_mode='mul',
    create_output=True
)
g.compile(
    optimizer=Adadelta(clipnorm=1.),
    sample_weight_modes={
        'action': 'temporal'
    },
    loss={
        'action': 'categorical_crossentropy'
    }
)

Model sizes are given in Table 3. Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed – for example, Call <name>Joan</name> is shown as Call Joan.

A.2 bAbI Task5 example dialog

good morning hello what can i help you with today i'd like to book a table with italian food i'm on it [silence] where should it be in paris how many people would be in your party for six people please which price range are looking for in a cheap price range please ok let me look into some options for you [silence] api_call italian paris six cheap [database results returned] [silence] what do you think of this option: resto madrid cheap spanish 8stars no this does not work for me sure let me find an other option for you [silence] what do you think of this option: resto madrid cheap spanish 6stars do you have something else sure let me find an other option for you [silence] what do you think of this option: resto madrid cheap spanish 1stars it's perfect great let me do the reservation may i have the phone number of the restaurant here it is resto madrid cheap spanish 1stars phone
im looking for canapes food in any price range I am sorry but there is no canapes restaurant that matches your request [*] how about european api_call european R_location R_price [database results returned] [silence] michaelhouse cafe is a great restaurant serving european food and it is in the expensive price range food an What part of town do you have in mind? any any part of town michaelhouse cafe serves european food . may i have the address and phone number please The phone number of michaelhouse cafe is michaelhouse cafe phone thank you good bye you are welcome A.4 Task5 and Task6 context features Context features used in Section 4 are in Table 4. A.5 Forgot password example dialog Some system actions have been shortened for space. How do I recover my password Okay, you don’t need to remember your password, we can reset it. Would you like to try that? Yes To reset your password, please see this URL ... Was that helpful? I don’t have a flashdrive Let’s connect you to a person who can help you A.6 Account access example dialog Some system actions have been shortened for space. cannot sign in to my email Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now? no If you still can’t sign in, please go the the following page and check that the email server is running ok... Is everything up and running? yes You need to reset your password - here’s how ... Did this solve your problem? still cannot get in Did you receive the security code? yes Check you entered the right numbers. Did this solve your problem? no Have you seen a message that your profile is temporarily blocked? just your message of update on sept 15 Did you turn on two-step verification recently? whats that? Sorry, I can’t help. Rephrase your question, or click below to talk to a person. A.7 Name dialing example dialogs How can I help you? Call Joe There’s more than one person named Joe. Can you say their full name? Call Joe Adamson SavePhonetypeavail() Calling Joe Adamson, work PlaceCall() How can I help you? Call Sally on her home 676 Feature bAbI Task5 bAbI Task6 Presence of each entity in dialog state 4 3 Presence of each entity in current utterance – 3 Whether DB has been queried yet – 1 Whether DB results are empty – 1 Whether DB results are non-empty – 1 Whether any DB results have been presented – 1 Whether all DB results have been presented – 1 Whether any DB results are available to present – 1 Whether curent query yields no results in training set – 1 Whether cuisine in current query yields no results in training set – 1 Total context features 4 14 Table 4: Binary context features used to convey entity and database state in Section 4. Sorry, I don’t have a home number for Sally Smith. I only have a work phone. Do you want to call that number? no Oh, sorry about that. Goodbye. 677
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 678–687 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1063

Generating Contrastive Referring Expressions

Martín Villalba and Christoph Teichmann and Alexander Koller
Department of Language Science and Technology, Saarland University, Germany
{villalba|cteichmann|koller}@coli.uni-saarland.de

Abstract

The referring expressions (REs) produced by a natural language generation (NLG) system can be misunderstood by the hearer, even when they are semantically correct. In an interactive setting, the NLG system can try to recognize such misunderstandings and correct them. We present an algorithm for generating corrective REs that use contrastive focus ("no, the BLUE button") to emphasize the information the hearer most likely misunderstood. We show empirically that these contrastive REs are preferred over REs without contrast marking.

1 Introduction

Interactive natural language generation (NLG) systems face the task of detecting when they have been misunderstood, and reacting appropriately to fix the problem. For instance, even when the system generated a semantically correct referring expression (RE), the user may still misunderstand it, i.e. resolve it to a different object from the one the system intended. In an interactive setting, such as a dialogue system or a pedestrian navigation system, the system can try to detect such misunderstandings – e.g. by predicting what the hearer understood from their behavior (Engonopoulos et al., 2013) – and to produce further utterances which resolve the misunderstanding and get the hearer to identify the intended object after all.

When humans correct their own REs, they routinely employ contrastive focus (Rooth, 1992; Krifka, 2008) to clarify the relationship to the original RE. Say that we originally described an object b as "the blue button", but the hearer approaches a button b′ which is green, thus providing evidence that they misunderstood the RE to mean b′. In this case, we would like to say "no, the BLUE button", with the contrastive focus realized by an appropriate pitch accent on "BLUE". This utterance alerts the hearer to the fact that they misunderstood the original RE; it reiterates the information from the original RE; and it marks the attribute "blue" as a salient difference between b′ and the object the original RE was intended to describe.

In this paper, we describe an algorithm for generating REs with contrastive focus. We start from the modeling assumption that misunderstandings arise because the RE rs the system uttered was corrupted by a noisy channel into an RE ru which the user "heard" and then resolved correctly; in the example above, we assume the user literally heard "the green button". We compute this (hypothetical) RE ru as the RE which refers to b′ and has the lowest edit distance from rs. Based on this, we mark the contrastive words in rs, i.e. we transform "the blue button" into "the BLUE button".
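A rough sketch of this idea follows, under strong simplifications: REs are treated as plain word sequences, the candidate REs for the misunderstood object b′ are given as a list rather than as the chart of Section 4, and a word of rs receives focus whenever it does not occur in the reconstructed ru (the paper's marking is more careful). All names are illustrative.

def edit_distance(a, b):
    # standard word-level Levenshtein distance between two word lists
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def contrastive_re(r_s, candidates_for_distractor):
    # reconstruct r_u as the candidate RE for b' closest to r_s, then
    # capitalize (focus) the words of r_s that r_u does not contain
    r_u = min(candidates_for_distractor, key=lambda r: edit_distance(r_s, r))
    return " ".join(w.upper() if w not in r_u else w for w in r_s)

# e.g. contrastive_re(["the", "blue", "button", "below", "the", "window"],
#                     [["the", "yellow", "button", "below", "the", "window"]])
# yields "the BLUE button below the window"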
We evaluate our system empirically on REs from the GIVE Challenge (Koller et al., 2010) and the TUNA Challenge (van der Sluis et al., 2007), and show that the contrastive REs generated by our system are preferred over a number of baselines. The paper is structured as follows. We first review related work in Section 2 and define the problem of generating contrastive REs in Section 3. Section 4 sketches the general architecture for RE generation on which our system is based. In Section 5, we present the corruption model and show how to use it to reconstruct ru. Section 6 describes how we use this information to generate contrastive markup in rs, and in Section 7 we evaluate our approach. 2 Related Work The notion of focus has been extensively studied in the literature on theoretical semantics and prag678 matics, see e.g. Krifka (2008) and Rooth (1997) for overview papers. Krifka follows Rooth (1992) in taking focus as “indicat(ing) the presence of alternatives that are relevant for the interpretation of linguistic expressions”; focus then establishes a contrast between an object and these alternatives. Bornkessel and Schlesewsky (2006) find that corrective focus can even override syntactic requirements, on the basis of “its extraordinarily high communicative saliency”. This literature is purely theoretical; we offer an algorithm for automatically generating contrastive focus. In speech, focus is typically marked through intonation and pitch accents (Levelt, 1993; Pierrehumbert and Hirschberg, 1990; Steube, 2001), while concepts that can be taken for granted are deaccented and/or deleted. Developing systems which realize precise pitch contours for focus in text-to-speech settings is an ongoing research effort. We therefore realize focus in written language in this paper, by capitalizing the focused word. We also experiment with deletion of background words. There is substantial previous work on interactive systems that detect and respond to misunderstandings. Misu et al. (2014) present an error analysis of an in-car dialogue system which shows that more than half the errors can only be resolved through further clarification dialogues, as opposed to better sensors and/or databases; that is, by improved handling of misunderstandings. Engonopoulos et al. (2013) detect misunderstandings of REs in interactive NLG through the use of a statistical model. Their model also predicts the object to which a misunderstood RE was incorrectly resolved. Moving from misunderstanding detection to error correction, Zarrieß and Schlangen (2016) present an interactive NLG algorithm which is capable of referring in installments, in that it can generate multiple REs that are designed to correct misunderstandings of earlier REs to the same object. The interactive NLG system developed by Akkersdijk et al. (2011) generates both reflective and anticipative feedback based on what a user does and sees. Their error detection and correction strategy distinguishes a fixed set of possible situations where feedback is necessary, and defines custom, hard-coded RE generation sub-strategies for each one. None of these systems generate REs marked for focus. We are aware of two items of previous work that address the generation of contrastive REs directly. Milosavljevic and Dale (1996) outline strategies for generating clarificatory comparisons in encyclopedic descriptions. Their surface realizer can generate contrastive REs, but the attributes that receive contrastive focus have to be specified by hand. 
Krahmer and Theune (2002) extend the Incremental Algorithm (Dale and Reiter, 1995) so it can mark attributes as contrastive. This is a fully automatic algorithm for contrastive REs, but it inherits all the limitations of the Incremental Algorithm, such as its reliance on a fixed attribute order. Neither of these two approaches evaluates the quality of the contrastive REs it generates. Finally, some work has addressed the issue of generating texts that realize the discourse relation contrast. For instance, Howcroft et al. (2013) show how to choose contrastive discourse connectives (but, while, ...) when generating restaurant descriptions, thus increasing human ratings for naturalness. Unlike their work, the research presented in this paper is not about discourse relations, but about assigning focus in contrastive REs. 3 Interactive NLG We start by introducing the problem of generating corrective REs in an interactive NLG setting. We use examples from the GIVE Challenge (Koller et al., 2010) throughout the paper; however, the algorithm itself is domain-independent. GIVE is a shared task in which an NLG system (the instruction giver, IG) must guide a human user (the instruction follower, IF) through a virtual 3D environment. The IF needs to open a safe and steal a trophy by clicking on a number of buttons in the right order without triggering alarms. The job of the NLG system is to generate natural-language instructions which guide the IF to complete this task successfully. The generation of REs has a central place in the GIVE Challenge because the system frequently needs to identify buttons in the virtual environment to the IF. Figure 1 shows a screenshot of a GIVE game in progress; here b1 and b4 are blue buttons, b2 and b3 are yellow buttons, and w1 is a window. If the next button the IF needs to press is b4 – the intended object, os – then one good RE for b4 would be “the blue button below the window”, and the system should utter: (1) Press the blue button below the window. After uttering this sentence, the system can 679 Figure 1: Example scene from the GIVE Challenge. track the IF’s behavior to see whether the IF has understood the RE correctly. If the wrong button is pressed, or if a model of IF’s behavior suggests that they are about to press the wrong button (Engonopoulos et al., 2013), the original RE has been misunderstood. However, the system still gets a second chance, since it can utter a corrective RE, with the goal of identifying b4 to the IF after all. Examples include simply repeating the original RE, or generating a completely new RE from scratch. The system can also explicitly take into account which part of the original RE the IF misunderstood. If it has reason to believe that the IF resolved the RE to b3, it could say: (2) No, the BLUE button below the window. This use of contrastive focus distinguishes the attributes the IF misunderstood (blue) from those that they understood correctly (below the window), and thus makes it easier for the IF to resolve the misunderstanding. In speech, contrastive focus would be realized with a pitch accent; we approximate this accent in written language by capitalizing the focused word. We call an RE that uses contrastive focus to highlight the difference between the misunderstood and the intended object, a contrastive RE. The aim of this paper is to present an algorithm for computing contrastive REs. 
4 Generating Referring Expressions While we make no assumptions on how the original RE rs was generated, our algorithm for reconstructing the corrupted RE ru requires an RE generation algorithm that can represent all semantically correct REs for a given object compactly in a chart. Here we sketch the RE generation of Engonopoulos and Koller (2014), which satisfies this requirement. NPb4,{b4} Nb4,{b4} PPb4,{b3,b4} NPw1,{w1} Nw1,{w1} window Dw1, the Pb4,below below Nb4,{b1,b4} Nb4,{b1,b2,b3,b4} button ADJb4,{b1,b4} blue Db4, the Figure 2: Example syntax tree for an RE for b4. This algorithm assumes a synchronous grammar which relates strings with the sets of objects they refer to. Strings and their referent sets are constructed in parallel from lexicon entries and grammar rules; each grammar rule specifies how the referent set of the parent is determined from those of the children. For the scene in Figure 1, we assume lexicon entries which express, among other things, that the word “blue” denotes the set {b1, b4} and the word “below” denotes the relation {(w1, b1), (w1, b2), (b3, w1), (b4, w1)}. We combine these lexicons entries using rules such as “N →button() |button |{b1, b2, b3, b4}” which generates the string “button” and associates it with the set of all buttons or “N →N1(N,PP) |w1 • w2 |R1 ∩R2” which states that a phrase of type noun can be combined with a prepositional phrase and their denotations will be intersected. Using these rules we can determine that “the window” denotes {w1}, that “below the window” can refer to {b3, b4} and that “blue button below the window” uniquely refers to {b4}. The syntax tree in Fig. 2 represents a complete derivation of an RE for {b4}. The algorithm of Engonopoulos and Koller computes a chart which represents the set of all possible REs for a given set of input objects, such as {b4}, according to the grammar. This is done by building a chart containing all derivations of the grammar which correspond to the desired set. They represent this chart as a finite tree automaton (Comon et al., 2007). Here we simply write the chart as a Context-Free Grammar. The strings produced by this Context-Free Grammar are then exactly the REs for the intended object. For example, the syntax tree in Fig. 2 is generated by the parse chart for the set {b4}. Its nonterminal symbols consist of three parts: a syntactic category 680 intended object: os referring expression: rs heard referring expression: ru user resolved object: ou Contrastive RE b4 b2 the blue button below the window the yellow button above the window Instruction Giver (IG) Corruption Instruction Follower (IF) Figure 3: The corruption model. (given by the synchronous grammar), the referent for which an RE is currently being constructed, and the set of objects to which the entire subtree refers. The grammar may include recursion and therefore allow for an infinite set of possible REs. If it is weighted, one can use the Viterbi algorithm to compute the best RE from the chart. 5 Listener Hypotheses and Edit Distance 5.1 Corruption model Now let us say that the system has generated and uttered an RE rs with the intention of referring to the object os, but it has then found that the IF has misunderstood the RE and resolved it to another object, ou (see Fig. 3). We assume for the purposes of this paper that such a misunderstanding arises because rs was corrupted by a noisy channel when it was transmitted to the IF, and the IF “heard” a different RE, ru. We further assume that the IF then resolved ru correctly, i.e. 
the corruption in the transmission is the only source of misunderstandings. In reality, there are of course many other reasons why the IF might misunderstand rs, such as lack of attention, discrepancies in the lexicon or the world model of the IG and IF, and so on. We make a simplifying assumption in order to make the misunderstanding explicit at the level of the RE strings, while still permitting meaningful corrections for a large class of misunderstandings. An NLG system that builds upon this idea in order to generate a corrective RE has access to the values of os, rs and ou; but it needs to infer the most likely corrupted RE ru. To do this, we model the corruption using the edit operations used for the familiar Levenshtein edit distance (Mohri, 2003) over the alphabet Σ: Sa, substitution of a word with a symbol a ∈Σ; D, deletion of a word; Ia, insertion of the symbol a ∈Σ; or K, keeping the word. The noisy channel passes over each word in rs and applies either D, K or one of the S operations to it. It may also apply I operations before or after a word. We call any sequence s of edit operations that could apply to rs an edit sequence for rs. An example for an edit sequence which corrupts rs = “the blue button below the window” into ru = “the yellow button above the window” is shown in Figure 4. The same ru could also have been generated by the edit operation sequence K Syellow K Sabove K K, and there is generally a large number of edit sequences that could transform between any two REs. If an edit sequence s maps x to y, we write apply(s, x) = y. We can now define a probability distribution P(s | rs) over edit sequences s that the noisy channel might apply to the string rs, as follows: P(s | rs) = 1 Z Y si∈s exp(−c(si)), where c(si) is a cost for using the edit operation si. We set c(K) = 0, and for any a in our alphabet we set c(Sa) = c(Ia) = c(D) = C, for some fixed C > 0. Z is a normalizing constant which is independent of s and ensures that the probabilities sum to 1. It is finite for sufficiently high values of C, because no sequence for rs can ever contain more K, S and D operations than there are words in rs, and the total weight of sequences generated by adding more and more I operations will converge. Finally, let L be the set of referring expressions that the IF would resolve to ou, i.e. the set of candidates for ru. Then the most probable edit sequence for rs which generates an ru ∈L is given by s∗ = arg max s : apply(s,rs)∈L P(s | rs) = arg mins P si∈s c(si), i.e. s∗is the edit sequence that maps rs to an RE in L with minimal cost. We will assume that s∗is the edit sequence that corrupted rs, i.e. that ru = apply(s∗, rs). 5.2 Finding the most likely corruption It remains to compute s∗; we will then show in Section 6 how it can be used to generate a corrective RE. Attempting to find s∗by enumeration is impractical, as the set of edit sequences for a given rs and ru may be large and the set of possible ru for a given ou may be infinite. Instead 681 rs the blue button below the window edit operation sequence K D Iyellow K Sabove K K ru the yellow button above the window Figure 4: Example edit sequence for a given corruption. we will use the algorithm from Section 4 to compute a chart for all the possible REs for ou, represented as a context-free grammar G whose language L = L(G) consists of these REs. We will then intersect it with a finite-state automaton which keeps track of the edit costs, obtaining a second context-free grammar G′. 
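When the candidate set L happens to be small enough to enumerate, the argmin above can be computed directly by picking the RE in L with minimal edit cost from rs, as in the sketch below (uniform costs c(K) = 0 and c(S) = c(I) = c(D) = C are assumed, and the candidate REs are invented for the example). This only illustrates the objective; the chart and automaton intersection just described exists precisely because L cannot be enumerated in general.

```python
# Illustration only: if the set L of REs that resolve to o_u is small enough
# to enumerate, s* amounts to choosing the RE in L with minimal edit cost from
# r_s under c(K) = 0 and c(S) = c(I) = c(D) = C. The chart-based construction
# avoids this enumeration, since L may be infinite in general.

def edit_cost(src, tgt, C=1.0):
    """Minimal total edit cost between two word sequences."""
    n, m = len(src), len(tgt)
    dist = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i * C                       # deletions
    for j in range(1, m + 1):
        dist[0][j] = j * C                       # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if src[i - 1] == tgt[j - 1] else C
            dist[i][j] = min(dist[i - 1][j - 1] + sub,
                             dist[i - 1][j] + C,
                             dist[i][j - 1] + C)
    return dist[n][m]

def most_likely_heard_re(r_s, candidates, C=1.0):
    """r_u = apply(s*, r_s): the candidate with minimal edit cost from r_s."""
    return min(candidates, key=lambda r_u: edit_cost(r_s, r_u, C))

r_s = "the blue button below the window".split()
L = ["the yellow button above the window".split(),
     "the window".split()]
print(" ".join(most_likely_heard_re(r_s, L)))
# -> "the yellow button above the window" (cost 2C vs. 4C)
```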
These operations can be performed efficiently, and s∗can be read off of the minimum-cost syntax tree of G′. Edit automaton. The possible edit sequences for a given rs can be represented compactly in the form of a weighted finite-state automaton F(rs) (Mohri, 2003). Each run of the automaton on a string w corresponds to a specific edit sequence that transforms rs into w, and the sum of transition weights of the run is the cost of that edit sequence. We call F(rs) the edit automaton. It has a state qi for every position i in rs; the start state is q0 and the final state is q|rs|. For each i, it has a “keep” transition from qi to qi+1 that reads the word at position i with cost 0. In addition, there are transitions from qi to qi+1 with cost C that read any symbol in Σ (for substitution) and ones that read the empty string ϵ (for deletion). Finally, there is a loop with cost C from each qi to itself and for any symbol in Σ, implementing insertion. An example automaton for rs = “the blue button below the window” is shown in Figure 5. The transitions are written in the form ⟨word in w : associated cost⟩. Note that every path through the edit transducer corresponds to a specific edit sequence s, and the sum of the costs along the path corresponds to −log P(s | rs) −log Z. Combining G and F(rs). Now we can combine G with F(rs) to obtain G′, by intersecting them using the Bar-Hillel construction (Bar-Hillel et al., 1961; Hopcroft and Ullman, 1979). For the purposes of our presentation we assume that G is in Chomsky Normal Form, i.e. all rules have the form A →a, where a is a word, or A →B C, where both symbols on the right hand side are nonterminals. The resulting grammar G′ uses nonterminal symbols of the form Nb,A,⟨qi,qk⟩, where b, A are as in Section 4, and qi, qk indicate that the string derived by this nonterminal was generated by editing the substring of rs from position i to k. Let Nb,A →a be a production rule of G with a word a on the right-hand side; as explained above, b is the object to which the subtree should refer, and A is the set of objects to which the subtree actually might refer. Let t = qi →⟨a:c⟩qk be a transition in F(rs), where q, q′ are states of F(rs) and c is the edit cost. From these two, we create a context-free rule Nb,A,⟨qi,qk⟩→a with weight c and add it to G′. If k = i + 1, these rules represent K and S operations; if k = i, they represent insertions. Now let Nb,A →Xb1,A1 Yb2,A2 be a binary rule in G, and let qi, qj, qk be states of F(rs) with i ≤j ≤k. We then add a rule Nb,A,⟨qi,qk⟩→ Xb1,A1,⟨qi,qj⟩Yb2,A2,⟨qj,qk⟩to G′. These rules are assigned weight 0, as they only combine words according to the grammar structure of G and do not encode any edit operations. Finally, we deal with deletion. Let Nb,A be a nonterminal symbol in G and let qh, qi, qj, qk be states of F(rs) with h ≤i ≤j ≤k. We then add a rule Nb,A,⟨qh,qk⟩→Nb,A,⟨qi,qj⟩to G′. This rule deletes the substrings from positions h to i and j to k from rs; thus we assign it the cost ((i − h) + (k −j))C, i.e. the cost of the corresponding ϵ transitions. If the start symbol of G is Sb,A, then the start symbol of G′ is Sb,A,⟨q0,q|rs|⟩. This construction intersects the languages of G and F(rs), but because F(rs) accepts all strings over the alphabet, the languages of G′ and G will be the same (namely, all REs for ou). However, the weights in G′ are inherited from F(rs); thus the weight of each RE in L(G′) is the edit cost from rs. Example. Fig. 6 shows an example tree for the G′ we obtain from the automaton in Fig. 5. 
We can read the string w = “the yellow button above the window” off of the leaves; by construction, this is an RE for ou. Furthermore, we can reconstruct the edit sequence that maps from rs to w from the rules of G′ that 682 q0 start q1 q2 q3 q4 q5 q6 the:0 Σ:C ϵ:C Σ:C blue:0 Σ:C ϵ:C Σ:C button:0 Σ:C ϵ:C Σ:C below:0 Σ:C ϵ:C Σ:C the:0 Σ:C ϵ:C Σ:C window:0 Σ:C Σ:C ϵ:C Σ:C Figure 5: Edit automaton F(rs) for rs = “the blue button below the window”. Tree NPb2,{b2}, ⟨q0, q6⟩ Nb2,{b2},⟨q1,q6⟩ PPb2,{b1,b2},⟨q3,q6⟩ NPw1,{w1},⟨q4,q6⟩ Nw1,{w1},⟨q5,q6⟩ window Dw1, ,⟨q4,q5⟩ the Pb2,above,⟨q3,q4⟩ above Nb2,{b2,b3},⟨q1,q3⟩ Nb2,{b2,b3},⟨q2,q3⟩ Nb2,{b1,b2,b3,b4},⟨q2,q3⟩ button ADJb2,{b2,b3},⟨q2,q2⟩ yellow Db2, ,⟨q0,q1⟩ the s K D Iyellow K Sabove K K Emphasis No, press the BLUE button BELOW the window Figure 6: A syntax tree described by G′, together with its associated edit sequence and contrastive RE. were used to derive w. We can see that “yellow” was created by an insertion because the two states of F(rs) in the preterminal symbol just above it are the same. If the two states are different, then the word was either substituted (“above”, if the rule had weight C) or kept (“the”, if the rule had weight 0). By contrast, unary rules indicate deletions, in that they make “progress” in rs without adding new words to w. We can compute the minimal-cost tree of G′ using the Viterbi algorithm. Thus, to summarize, we can calculate s∗from the intersection of a contextfree grammar G representing the REs to ou with the automaton F(rs) representing the edit distance to rs. From this, we obtain ru = apply(s∗, rs). This is efficient in practice. 6 Generating Contrastive REs 6.1 Contrastive focus We are now ready to generate a contrastive RE from rs and s∗. We assign focus to the words in rs which were changed by the corruption – that is, the ones to which s∗applied Substitute or Delete operations. For instance, the edit sequence in Fig. 6 deleted “blue” and substituted “below” with “above”. Thus, we mark these words with focus, and obtain the contrastive RE “the BLUE button BELOW the window”. We call this strategy Emphasis, and write rsE for the RE obtained by applying the Emphasis strategy to the RE rs. 6.2 Shortening We also investigate a second strategy, which generates more succinct contrastive REs than the Emphasis strategy. Most research on RE generation (e.g. Dale and Reiter (1995)) has assumed that hearers should prefer succinct REs, which in particular do not violate the Maxim of Quantity (Grice, 1975). When we utter a contrastive RE, the user has previously heard the RE rs, so some of the information in rsE is redundant. Thus we might obtain a more succinct, and possibly better, RE by dropping such redundant information from the RE. For the grammars we consider here, rsE often combines an NP and a PP, e.g. “[blue button]NP [below the window]PP ”. If errors occur only in one of these constituents, then it might be sufficient to generate a contrastive RE using only that constituent. We call this strategy Shortening and define it as follows. If all the words that are emphasized in rsE are in the NP, the Shortening RE is “the” plus the NP, with emphasis as in rsE. So if rs is “the [blue button] [above the window]” and s∗= K Syellow K K K K, corresponding to a rsE of “the [BLUE button] [above the window]”, then the RE would be “the [BLUE button]”. If all the emphasis in rsE is in the PP, we use 683 We wanted our player to select this button: So we told them: press the red button to the right of the blue button. 
But they selected this button instead: Which correction is better for this scene? ◦No, press the red BUTTON to the right of the BLUE BUTTON ◦No, press the red button to the RIGHT of the blue button Figure 7: A sample scene from Experiment 1. “the one” plus the PP and again capitalize as in rsE. So if we have s∗= K K K Sbelow K K, where rsE is “the [blue button] [ABOVE the window]”, we obtain “the one [ABOVE the window].” If there is no PP or if rsE emphasizes words in both the NP and the PP, then we just use rsE. 7 Evaluation To test whether our algorithm for contrastive REs assigns contrastive focus correctly, we evaluated it against several baselines in crowdsourced pairwise comparison overhearer experiments. Like Buß et al. (2010), we opted for an overhearer experiment to focus our evaluation on the effects of contrastive feedback, as opposed to the challenges presented by the navigational and timing aspects of a fully interactive system. 7.1 Domains and stimuli We created the stimuli for our experiments from two different domains. We performed a first experiment with scenes from the GIVE Challenge, while a second experiment replaced these scenes with stimuli from the “People” domain of the TUNA Reference Corpus (van der Sluis et al., 2007). This corpus consists of photographs of men annotated with nine attributes, such as whether the We wanted our player to select the person circled in green: So we told them: the light haired old man in a suit looking straight. But they selected the person circled in red instead. Which correction is better for this scene? ◦No, the light haired old man IN A SUIT LOOKING STRAIGHT ◦No, the LIGHT HAIRED OLD man in a suit looking straight Figure 8: A sample scene from Experiment 2. person has a beard, a tie, or is looking straight. Six of these attributes were included in the corpus to better reflect human RE generation strategies. Many human-generated REs in the corpus are overspecific, in that they contain attributes that are not necessary to make the RE semantically unique. We chose the GIVE environment in order to test REs referring both to attributes of an object, i.e. color, and to its spatial relation to other visible objects in the scene. The TUNA Corpus was chosen as a more challenging domain, due to the greater number of available properties for each object on a scene. Each experimental subject was presented with screenshots containing a marked object and an RE. Subjects were told that we had previously referred to the marked object with the given RE, but an (imaginary) player misunderstood this RE and selected a different object, shown in a second screenshot. They were then asked to select which one of two corrections they considered better, where “better” was intentionally left unspecific. Figs. 7 and 8 show examples for each domain. The full set of stimuli is available as supplementary material. To maintain annotation quality in our crowdsourcing setting, we designed test items with a 684 clearly incorrect answer, such as REs referring to the wrong target or a nonexistent one. These test items were randomly interspersed with the real stimuli, and only subjects with a perfect score on the test items were taken into account. Experimental subjects were asked to rate up to 12 comparisons, shown in groups of 3 scenes at a time, and were automatically disqualified if they evaluated any individual scene in less than 10 seconds. 
The order in which the pairs of strategies were shown was randomized, to avoid effects related to the order in which they were presented on screen. 7.2 Experiment 1 Our first experiment tested four strategies against each other. Each experimental subject was presented with two screenshots of 3D scenes with a marked object and an RE (see Fig. 7 for an example). Each subject was shown a total of 12 scenes, selected at random from 16 test scenes. We collected 10 judgments for each possible combination of GIVE scene and pair of strategies, yielding a total of 943 judgements from 142 subjects after removing fake answers. We compared the Emphasis and Shortening strategies from Section 6 against two baselines. The Repeat strategy simply presented rs as a “contrastive” RE, without any capitalization. Comparisons to Repeat test the hypothesis that subjects prefer explicit contrastive focus. The Random strategy randomly capitalized adjectives, adverbs, and/or prepositions that were not capitalized by the Emphasis strategy. Comparisons to Random verify that any preference for Emphasis is not only due to the presence of contrastive focus, but also because our method identifies precisely where that focus should be. Table 1a shows the results of all pairwise comparisons. For each row strategy StratR and each column strategy StratC, the table value corresponds to (#StratR pref. over StratC)−(#StratC pref. over StratR) (# tests between StratR and StratC) Significance levels are taken from a two-tailed binomial test over the counts of preferences for each strategy. We find a significant preference for the Emphasis strategy over all others, providing evidence that our algorithm assigns contrastive focus to the right words in the corrective RE. While the Shortening strategy is numerically preferred over both baselines, the difference is not significant, and it is significantly worse than the Emphasis strategy. This is surprising, given our initial assumption that listeners prefer succinct REs. It is possible that a different strategy for shortening contrastive REs would work better; this bears further study. 7.3 Experiment 2 In our second experiment, we paired the Emphasis, Repeat, and Random strategies against each other, this time evaluating each strategy in the TUNA people domain. Due to its poor performance in Experiment 1, which was confirmed in pilot experiments for Experiment 2, the Shortening strategy was not included. The experimental setup for the TUNA domain used 3x4 grids of pictures of people chosen at random from the TUNA Challenge, as shown in Fig. 8. We generated 8 such grids, along with REs ranging from two to five attributes and requiring one or two attributes to establish the correct contrast. The larger visual size of objects in the the TUNA scenes allowed us to mark both os and ou in a single picture without excessive clutter. The REs for Experiment 2 were designed to only include attributes from the referred objects, but no information about its position in relation to other objects. The benefit is twofold: we avoid taxing our subjects’ memory with extremely long REs, and we ensure that the overall length of the second set of REs is comparable to those in the previous experiment. We obtained 240 judgements from 65 subjects (after removing fake answers). Table 1b shows the results of all pairwise comparisons. We find that even in the presence of a larger number of attributes, our algorithm assigns contrastive focus to the correct words of the RE. 
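The scores and significance levels reported in Table 1 can be recomputed from raw preference counts along the following lines; the counts below are placeholders rather than the collected judgements, and the exact two-sided binomial test is written out explicitly instead of being taken from a statistics library. Ties are assumed to be absent, so the number of tests equals the sum of the two preference counts.

```python
from math import comb

# Sketch of the pairwise comparison statistics: the table value is the
# normalized difference of preference counts, and significance comes from an
# exact two-tailed binomial test with p = 0.5. The counts are placeholders.

def preference_score(pref_row, pref_col):
    """(#row preferred - #col preferred) / (number of tests between them)."""
    return (pref_row - pref_col) / (pref_row + pref_col)

def binomial_two_tailed(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of all outcomes
    that are no more likely than the observed count k out of n."""
    pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-9))

emphasis_wins, repeat_wins = 47, 13                   # hypothetical counts
print(round(preference_score(emphasis_wins, repeat_wins), 3))         # 0.567
print(binomial_two_tailed(emphasis_wins, emphasis_wins + repeat_wins)
      < 0.001)                                        # True: significant
```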
7.4 Discussion Our experiments confirm that the strategy for computing contrastive REs presented in this paper works in practice. This validates the corruption model, which approximates semantic mismatches between what the speaker said and what the listener understood as differences at the level of words in strings. Obviously, this model is still an approximation, and we will test its limits in future work. We find that users generally prefer REs with an emphasis over simple repetitions. In the more challenging scenes of the TUNA corpus, users even have a significant preference of Random over 685 Repeat Random Emphasis Shortening Repeat – 0.041 -0.570*** -0.141 Random -0.041 – -0.600*** -0.109 Emphasis 0.570*** 0.600*** – 0.376*** Shortening 0.141 0.109 -0.376*** – (a) Results for Experiment 1 Repeat Random Emphasis Repeat – -0.425*** -0.575*** Random 0.425*** – -0.425*** Emphasis 0.575*** 0.425*** – (b) Results for Experiment 2 Table 1: Pairwise comparisons between feedback strategies for experiments 1 and 2. A positive value shows preference for the row strategy, significant at *** p < 0.001. Repeat, although this makes no semantic sense. This preference may be due to the fact that emphasizing anything at least publically acknowledges the presence of a misunderstanding that requires correction. It will be interesting to explore whether this preference holds up in an interactive setting, rather than an overhearer experiment, where listeners will have to act upon the corrective REs. The poor performance of the Shortening strategy is a surprising negative result. We would expect a shorter RE to always be preferred, following the Gricean Maxim of Quantity (Grice, 1975). This may because our particular Shortening strategy can be improved, or it may be because listeners interpret the shortened REs not with respect to the original instructions, but rather with respect to a “refreshed” context (as observed, for instance, in Gotzner et al. (2016)). In this case the shortened REs would not be unique with respect to the refreshed, wider context. 8 Conclusion In this paper, we have presented an algorithm for generating contrastive feedback for a hearer who has misunderstood a referring expression. Our technique is based on modeling likely user misunderstandings and then attempting to give feedback that contrasts with the most probable incorrect understanding. Our experiments show that this technique accurately predicts which words to mark as focused in a contrastive RE. In future work, we will complement the overhearer experiment presented here with an end-toend evaluation in an interactive NLG setting. This will allow us to further investigate the quality of the correction strategies and refine the Shortening strategy. It will also give us the opportunity to investigate empirically the limits of the corruption model. Furthermore, we could use this data to refine the costs c(D), c(Ia) etc. for the edit operations, possibly assigning different costs to different edit operations. Finally, it would be interesting to combine our algorithm with a speech synthesis system. In this way, we will be able to express focus with actual pitch accents, in contrast to the typographic approximation we made here. References Saskia Akkersdijk, Marin Langenbach, Frieder Loch, and Mari¨et Theune. 2011. The thumbs up! twente system for give 2.5. In The 13th European Workshop on Natural Language Generation (ENLG 2011). Yehoshua Bar-Hillel, Micha Perles, and Eli Shamir. 1961. 
On formal properties of simple phrase structure grammars. Zeitschrift f¨ur Phonetik, Sprachwissenschaft und Kommunikationsforschung 14:143– 172. Ina Bornkessel and Matthias Schlesewsky. 2006. The role of contrast in the local licensing of scrambling in german: Evidence from online comprehension. Journal of Germanic Linguistics 18(01):1–43. Okko Buß, Timo Baumann, and David Schlangen. 2010. Collaborating on utterances with a spoken dialogue system using an isu–based approach to incremental dialogue management. In Proceedings of the Special Interests Group on Discourse and Dialogue Conference (SIGdial 2010). Hubert Comon, Max Dauchet, R´emi Gilleron, Florent Jacquemard, Denis Lugiez, Sophie Tison, Marc Tommasi, and Christof L¨oding. 2007. Tree Automata techniques and applications. published online - http://tata.gforge.inria.fr/. http://tata.gforge.inria.fr/. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science 19(2):233–263. Nikos Engonopoulos and Alexander Koller. 2014. Generating effective referring expressions using charts. In Proceedings of the INLG and SIGdial 2014 Joint Session. Nikos Engonopoulos, Mart´ın Villalba, Ivan Titov, and Alexander Koller. 2013. Predicting the resolution of referring expressions from user behavior. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013). 686 Nicole Gotzner, Isabell Wartenburger, and Katharina Spalek. 2016. The impact of focus particles on the recognition and rejection of contrastive alternatives. Language and Cognition 8(1):59–95. H. Paul Grice. 1975. Logic and conversation. In P. Cole and J. L. Morgan, editors, Syntax and Semantics: Vol. 3: Speech Acts, Academic Press, pages 41–58. John Edward Hopcroft and Jeffrey Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. David Howcroft, Crystal Nakatsu, and Michael White. 2013. Enhancing the expression of contrast in the SPaRKy restaurant corpus. In Proceedings of the 14th European Workshop on Natural Language Generation (ENLG 2013). Alexander Koller, Kristina Striegnitz, Andrew Gargett, Donna Byron, Justine Cassell, Robert Dale, Johanna Moore, and Jon Oberlander. 2010. Report on the Second NLG Challenge on Generating Instructions in Virtual Environments (GIVE-2). In Proceedings of the Sixth International Natural Language Generation Conference (Special session on Generation Challenges). E. Krahmer and M. Theune. 2002. Efficient contextsensitive generation of referring expressions. In K. van Deemter and R. Kibble, editors, Information Sharing: Reference and Presupposition in Language Generation and Interpretation, Center for the Study of Language and Information-Lecture Notes, CSLI Publications, volume 143, pages 223–263. Manfred Krifka. 2008. Basic notions of information structure. Acta Linguistica Hungarica 55:243–276. Willem J.M. Levelt. 1993. Speaking: From Intention to Articulation. MIT University Press Group. Maria Milosavljevic and Robert Dale. 1996. Strategies for comparison in encyclopædia descriptions. In Proceedings of the 8th International Natural Language Generation Workshop (INLG 1996). Teruhisa Misu, Antoine Raux, Rakesh Gupta, and Ian Lane. 2014. Situated language understanding at 25 miles per hour. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGdial 2014). Mehryar Mohri. 2003. Edit-distance of weighted automata: General definitions and algorithms. 
International Journal of Foundations of Computer Science 14(6):957–982. Janet B. Pierrehumbert and Julia Hirschberg. 1990. The meaning of intonational contours in the interpretation of discourse. In Philip R. Cohen, Jerry Morgan, and Martha E. Pollack, editors, Intentions in Communication, MIT University Press Group, chapter 14. Mats Rooth. 1992. A theory of focus interpretation. Natural Language Semantics 1:75–116. Mats Rooth. 1997. Focus. In Shalom Lappin, editor, The Handbook of Contemporary Semantic Theory, Blackwell Publishing, chapter 10, pages 271–298. Anita Steube. 2001. Correction by contrastive focus. Theoretical Linguistics 27(2-3):215–250. Ielka van der Sluis, Albert Gatt, and Kees van Deemter. 2007. Evaluating algorithms for the generation of referring expressions: Going beyond toy domains. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2007). Sina Zarrieß and David Schlangen. 2016. Easy Things First: Installments Improve Referring Expression Generation for Objects in Photographs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). 687
2017
63
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 688–697 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1064 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 688–697 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1064 Modeling Source Syntax for Neural Machine Translation Junhui Li† Deyi Xiong† Zhaopeng Tu‡∗ Muhua Zhu‡ Min Zhang† Guodong Zhou† †School of Computer Science and Technology, Soochow University, Suzhou, China {lijunhui, dyxiong, minzhang, gdzhou}@suda.edu.cn ‡Tencent AI Lab, Shenzhen, China [email protected], [email protected] Abstract Even though a linguistics-free sequence to sequence model in neural machine translation (NMT) has certain capability of implicitly learning syntactic information of source sentences, this paper shows that source syntax can be explicitly incorporated into NMT effectively to provide further improvements. Specifically, we linearize parse trees of source sentences to obtain structural label sequences. On the basis, we propose three different sorts of encoders to incorporate source syntax into NMT: 1) Parallel RNN encoder that learns word and label annotation vectors parallelly; 2) Hierarchical RNN encoder that learns word and label annotation vectors in a two-level hierarchy; and 3) Mixed RNN encoder that stitchingly learns word and label annotation vectors over sequences where words and labels are mixed. Experimentation on Chinese-to-English translation demonstrates that all the three proposed syntactic encoders are able to improve translation accuracy. It is interesting to note that the simplest RNN encoder, i.e., Mixed RNN encoder yields the best performance with an significant improvement of 1.4 BLEU points. Moreover, an in-depth analysis from several perspectives is provided to reveal how source syntax benefits NMT. 1 Introduction Recently the sequence to sequence model (seq2seq) in neural machine translation (NMT) has achieved certain success over the state-ofthe-art of statistical machine translation (SMT) ∗Work done at Huawei Noah’s Ark Lab, HongKong. ӳՂ ᦤԻಅ ಢٵ ෛኞ ᱷᤈ ኩ᧗ Ӥ૱ ໜ NP2 NP1 VV tokoyo stock exchange approves new listing bank input: output: reference: tokyo exchange approves shinsei bank 's application for listing (a). An example of discontinuous translation ՜ժ ๶ᛔ م ӻ ਹସ , ٌӾ ӷ ӻ ঀ਎ ဌํ ᆿྮ ̶ NP they came from six families with two girls and two girls . they came from six families and two girls are without parents . (b). An example of over translation input: output: reference: Figure 1: Examples of NMT translation that fail to respect source syntax. on various language pairs (Bahdanau et al., 2015; Jean et al., 2015; Luong et al., 2015; Luong and Manning, 2015). However, Shi et al. (2016) show that the seq2seq model still fails to capture a lot of deep structural details, even though it is capable of learning certain implicit source syntax from sentence-aligned parallel corpus. Moreover, it requires an additional parsing-task-specific training mechanism to recover the hidden syntax in NMT. As a result, in the absence of explicit linguistic knowledge, the seq2seq model in NMT tends to produce translations that fail to well respect syntax. In this paper, we show that syntax can be well exploited in NMT explicitly by taking advantage of source-side syntax to improve the translation accuracy. 
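For concreteness, the linearization mentioned above can be sketched as follows: a toy phrase-structure tree is traversed depth-first to produce the structural label sequence, the mixed label-and-word sequence consumed by the Mixed RNN encoder of Section 3, and the position of each word's POS tag within the label sequence. The nested-tuple tree representation and the function names are assumptions made only for this sketch.

```python
# Toy sketch of the depth-first linearization. The parse tree is written as
# nested tuples (label, child, ...) with words as leaf strings -- a
# representation assumed here purely for illustration.

TREE = ("S",
        ("NP", ("PRN", "I")),
        ("VP", ("VBP", "love"),
               ("NP", ("NNS", "dogs"))))

def label_sequence(tree):
    """Depth-first sequence of structural labels (phrase labels and POS tags)."""
    labels = [tree[0]]
    for child in tree[1:]:
        if isinstance(child, tuple):
            labels.extend(label_sequence(child))
    return labels

def mixed_sequence(tree):
    """Depth-first sequence stitching labels and words together, i.e. the
    input consumed by the Mixed RNN encoder."""
    out = [tree[0]]
    for child in tree[1:]:
        out.extend(mixed_sequence(child) if isinstance(child, tuple) else [child])
    return out

def pos_positions(tree):
    """Index of each word's POS tag in the label sequence, i.e. the position
    whose label annotation vector the word is paired with (keyed by word for
    this toy example)."""
    mapping, counter = {}, [0]
    def visit(node):
        idx = counter[0]
        counter[0] += 1
        for child in node[1:]:
            if isinstance(child, tuple):
                visit(child)
            else:               # leaf word: the current label is its POS tag
                mapping[child] = idx
    visit(tree)
    return mapping

print(label_sequence(TREE))  # ['S', 'NP', 'PRN', 'VP', 'VBP', 'NP', 'NNS']
print(mixed_sequence(TREE))  # ['S', 'NP', 'PRN', 'I', 'VP', 'VBP', 'love', 'NP', 'NNS', 'dogs']
print(pos_positions(TREE))   # {'I': 2, 'love': 4, 'dogs': 6}
```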
In principle, syntax is a promising avenue for translation modeling. This has been verified by tremendous encouraging studies on syntaxbased SMT that substantially improves translation by integrating various kinds of syntactic knowledge (Liu et al., 2006; Marton and Resnik, 2008; 688 Shen et al., 2008; Li et al., 2013). While it is yet to be seen how syntax can benefit NMT effectively, we find that translations of NMT sometimes fail to well respect source syntax. Figure 1 (a) shows a Chinese-to-English translation example of NMT. In this example, the NMT seq2seq model incorrectly translates the Chinese noun phrase (i.e., 新 生/xinsheng 银行/yinhang) into a discontinuous phrase in English (i.e., new ... bank) due to the failure of capturing the internal syntactic structure in the input Chinese sentence. Statistics on our development set show that one forth of Chinese noun phrases are translated into discontinuous phrases in English, indicating the substantial disrespect of syntax in NMT translation.1 Figure 1 (b) shows another example with over translation, where the noun phrase 两/liang 个/ge 女孩/nvhai is translated twice in English. Similar to discontinuous translation, over translation usually happens along with the disrespect of syntax which results in the repeated translation of the same source words in multiple positions of the target sentence. In this paper we are not aiming at solving any particular issue, either the discontinuous translation or the over translation. Alternatively, we address how to incorporate explicitly the source syntax to improve the NMT translation accuracy with the expectation of alleviating the issues above in general. Specifically, rather than directly assigning each source word with manually designed syntactic labels, as Sennrich and Haddow (2016) do, we linearize a phrase parse tree into a structural label sequence and let the model automatically learn useful syntactic information. On the basis, we systematically propose and compare several different approaches to incorporating the label sequence into the seq2seq NMT model. Experimentation on Chinese-to-English translation demonstrates that all proposed approaches are able to improve the translation accuracy. 2 Attention-based NMT As a background and a baseline, in this section, we briefly describe the NMT model with an attention mechanism by Bahdanau et al. (2015), which mainly consists of an encoder and a decoder, as shown in Figure 2. Encoder The encoding of a source sentence is for1Manually examining 200 random such discontinuously translated noun phrases, we find that 90% of them should be continuously translated according to the reference translation. h1
 hm x1 x2 ….. xm h Atten h si-1 ci RNN MLP yi yi-1 si (a) encoder (b) decoder Figure 2: Attention-based NMT model. mulated using a pair of neural networks, i.e., two recurrent neural networks (denoted bi-RNN): one reads an input sequence x = (x1, ..., xm) from left to right and outputs a forward sequence of hidden states (−→ h1, ..., −→ hm), while the other operates from right to left and outputs a backward sequence (←− h1, ..., ←− hm). Each source word xj is represented as hj (also referred to as word annotation vector): the concatenation of hidden states −→ hj and ←− hj. Such bi-RNN encodes not only the word itself but also its left and right context, which can provide important evidence for its translation. Decoder The decoder is also an RNN that predicts a target sequence y = (y1, ..., yn). Each target word yi is predicted via a multi-layer perceptron (MLP) component which is based on a recurrent hidden state si, the previous predicted word yi−1, and a source-side context vector ci. Here, ci is calculated as a weighted sum over source annotation vectors (h1, ..., hm). The weight vector αi ∈Rm over source annotation vectors is obtained by an attention model, which captures the correspondences between the source and the target languages. The attention weight αij is computed based on the previous recurrent hidden state si−1 and source annotation vector hj. 3 NMT with Source Syntax The conventional NMT models treat a sentence as a sequence of words and ignore external knowledge, failing to effectively capture various kinds of inherent structure of the sentence. To leverage external knowledge, specifically the syntax in the source side, we focus on the parse tree of a sentence and propose three different NMT models that explicitly consider the syntactic structure into encoding. Our purpose is to inform the NMT model the structural context of each word in its corresponding parse tree with the goal that the learned annotation vectors (h1, ..., hm) encode not 689 I love dogs w1 w2 w3 (a) word sequence S NP PRN VP VBP NP NNS I love dogs (b) phrase parse tree S NP PRN VP VBP NP NNS l1 l2 l3 l4 l5 l6 l7 (c) structural label sequence Figure 3: An example of an input sentence (a), its parse tree (b), and the parse tree’s sequential form (c). only the information of words and their surroundings, but also structural context in the parse tree. In the rest of this section, we use English sentences as examples to explain our methods. 3.1 Syntax Representation To obtain the structural context of a word in its parse tree, ideally the model should not only capture and remember the whole parse tree structure, but also discriminate the contexts of any two different words. However, considering the lack of efficient way to directly model structural information, an alternative way is to linearize the phrase parse tree into a sequence of structural labels and learn the structural context through the sequence. For example, Figure 3(c) shows the structural label sequence of Figure 3(b) in a simple way following a depth-first traversal order. Note that linearizing a parse tree in a depth-first traversal order into a sequence of structural labels has also been widely adopted in recent advances in neural syntactic parsing (Vinyals et al., 2015; Choe and Charniak, 2016), suggesting that the linearized sequence can be viewed as an alternative to its tree structure.2 2We have also tried to include the ending brackets in the structural label sequence, as what (Vinyals et al., 2015; Choe hw1
[Figure 4 diagram: (a) Parallel RNN encoder and (b) Hierarchical RNN encoder, drawn over the example "I love dogs" with the label sequence "S NP PRN VP VBP NP NNS" and word embeddings ew1, ew2, ew3]
 RNN (b) Hierarchical RNN encoder Figure 4: The graphical illustration of the Parallel RNN encoder (a) and the Hierarchical RNN encoder (b). Here, −−→ hwj and ←−− hwj are the forward and backward hidden states for word wj, −→ hli and ←− hli are for structural label li, ewj is the word embedding for word wj, and L is for concatenation operator. There is no doubt that the structural label sequence is much longer than its word sequence. In order to obtain the structural label annotation vector for wi in word sequence, we simply look for wi’s part-of-speech (POS) tag in the label sequence and view the tag’s annotation vector as wi’s label annotation vector. This is because wi’s POS tag location can also represent wi’s location in the parse tree. For example, in Figure 3, word w1 in (a) maps to l3 in (c) since l3 is the POS tag of w1. Likewise, w2 maps to l5 and w3 to l7. That is to say, we use l3’s learned annotation vector as w1’s label annotation vector. and Charniak, 2016) do. However, the performance gap is very small by adding the ending brackets or not. 690 3.2 RNN Encoders with Source Syntax In the next, we first propose two different encoders to augment word annotation vector with its corresponding label annotation vector, each of which consists of two RNNs 3: in one encoder, the two RNNs work independently (i.e., Parallel RNN Encoder) while in another encoder the two RNNs work in a hierarchical way (i.e., Hierarchical RNN Encoder). The difference between the two encoders lies in how the two RNNs interact. Then, we propose the third encoder with a single RNN, which learns word and label annotation vectors stitchingly (i.e., Mixed RNN Encoder). Since any of the above three approaches focuses only on the encoder as to generate source annotation vectors along with structural information, we keep the rest part of the NMT models unchanged. Parallel RNN Encoder Figure 4 (a) illustrates our Parallel RNN encoder, which includes two parallel RNNs: i.e., a word RNN and a structural label RNN. On the one hand, the word RNN, as in conventional NMT models, takes a word sequence as input and output a word annotation vector for each word. On the other hand, the structural label RNN takes the structural label sequence of the word sequence as input and obtains a label annotation vector for each label. Besides, we concatenate each word’s word annotation vector and its POS tag’s label annotation vector as the final annotation vector for the word. For example, the final annotation vector for word love in Figure 4 (a) is [−−→ hw2; ←−− hw2; −→ hl5; ←− hl5], where the first two subitems [−−→ hw2; ←−− hw2] are the word annotation vector and the rest two subitems [−→ hl5; ←− hl5] are its POS tag VBP’s label annotation vector. Hierarchical RNN Encoder Partially inspired by the model architecture of GNMT (Wu et al., 2016) which consists of multiple layers of LSTM RNNs, we propose a two-layer model architecture in which the lower layer is the structural label RNN while the upper layer is the word RNN, as shown in Figure 4 (b). We put the word RNN in the upper layer because each item in the word sequence can map into an item in the structural label sequence, while this does not hold if the order of the two RNNs is reversed. As shown in Figure 4 (b), for example, the POS tag VBP’s label annotation vector [−→ hl5, ←− hl5] is concatenated with word 3Hereafter, we simplify bi-RNN as RNN. S NP PRN I VP VBP love NP NNS dogs h1
 h10 Figure 5: The graphical illustration of the Mixed RNN encoder. Here, −→ hj and ←− hj are the forward and backward hidden annotation vectors for j-th item, which can be either a word or a structural label. love’s word embedding ew2 to feed as the input to the word RNN. Mixed RNN Encoder Figure 5 presents our Mixed RNN encoder. Similarly, the sequence of input is the linearization of its parse tree (as in Figure 3 (b)) following a depth-first traversal order, but being mixed with both words and structural labels in a stitching way. It shows that the RNN learns annotation vectors for both the words and the structural labels, though only the annotation vectors of words are further fed to decoding (e.g., ([−→ h4, ←− h4], [−→ h7, ←− h7], [−→ h10, ←− h10])). Even though the annotation vectors of structural labels are not directly fed forward for decoding, the error signal is back propagated along the word sequence and allows the annotation vectors of structural labels being updated accordingly. 3.3 Comparison of RNN Encoders with Source Syntax Though all the three encoders model both word sequence and structural label sequence, the differences lie in their respective model architecture with respect to the degree of coupling the two sequences: • In the Parallel RNN encoder, the word RNN and structural label RNN work in a parallel way. That is to say, the error signal back propagated from the word sequence would not affect the structural label RNN, and vice versa. In contrast, in the Hierarchical RNN encoder, the error signal back propagated from the word sequence has a direct impact on the structural label annotation vectors, and thus on the structural label embeddings. Finally, the Mixed RNN encoder ties the structural label sequence and word sequence together in the closest way. Therefore, the degrees of coupling the word and structural 691 label sequences in these three encoders are like this: Mixed RNN encoder > Hierarchical RNN encoder > Parallel RNN encoder. • Figure 4 and Figure 5 suggest that the Mixed RNN encoder is the simplest. Moreover, comparing to conventional NMT encoders, the difference lies only in the length of the input sequence. Statistics on our training data reveal that the Mixed RNN encoder approximately triples the input sequence length compared to conventional NMT encoders. 4 Experimentation We have presented our approaches to incorporating the source syntax into NMT encoders. In this section, we evaluate their effectiveness on Chinese-to-English translation. 4.1 Experimental Settings Our training data for the translation task consists of 1.25M sentence pairs extracted from LDC corpora, with 27.9M Chinese words and 34.5M English words respectively.4 We choose NIST MT 06 dataset (1664 sentence pairs) as our development set, and NIST MT 02, 03, 04, and 05 datasets (878, 919, 1788 and 1082 sentence pairs, respectively) as our test sets.5 To get the source syntax for sentences on the source-side, we parse the Chinese sentences with Berkeley Parser 6 (Petrov and Klein, 2007) trained on Chinese TreeBank 7.0 (Xue et al., 2005). We use the case insensitive 4-gram NIST BLEU score (Papineni et al., 2002) for the translation task. For efficient training of neural networks, we limit the maximum sentence length on both source and target sides to 50. We also limit both the source and target vocabularies to the most frequent 16K words in Chinese and English, covering approximately 95.8% and 98.2% of the two corpora respectively. 
All the out-of-vocabulary words are mapped to a special token UNK. Besides, the word embedding dimension is 620 and the size of a hidden layer is 1000. All the other settings are the same as in Bahdanau et al.(2015). 4The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 5http://www.itl.nist.gov/iad/mig/ tests/mt/ 6https://github.com/slavpetrov/ berkeleyparser The inventory of structural labels includes 16 phrase labels and 32 POS tags. In both our Parallel RNN encoder and Hierarchical RNN encoder, we set the embedding dimension of these labels as 100 and the size of a hidden layer as 100. Besides, the maximum structural label sequence length is set to 100. In our Mixed RNN encoder, since we only have one input sequence, we equally treat the structural labels and words (i.e., a structural label is also initialized with 620 dimension embedding). Compared to the baseline NMT model, the only different setting is that we increase the maximum sentence length on source-side from 50 to 150. We compare our method with two state-of-theart models of SMT and NMT: • cdec (Dyer et al., 2010): an open source hierarchical phrase-based SMT system (Chiang, 2007) with default configuration and a 4-gram language model trained on the target portion of the training data.7 • RNNSearch: a re-implementation of the attentional NMT system (Bahdanau et al., 2015) with slight changes taken from dl4mt tutorial.8 For the activation function f of an RNN, RNNSearch uses the gated recurrent unit (GRU) recently proposed by (Cho et al., 2014b). It incorporates dropout (Hinton et al., 2012) on the output layer and improves the attention model by feeding the lastly generated word. We use AdaDelta (Zeiler, 2012) to optimize model parameters in training with the mini-batch size of 80. For translation, a beam search with size 10 is employed. 4.2 Experiment Results Table 1 shows the translation performances measured in BLEU score. Clearly, all the proposed NMT models with source syntax improve the translation accuracy over all test sets, although there exist considerable differences among different variants. Parameters The three proposed models introduce new parameters in different ways. As a baseline model, RNNSearch has 60.6M parameters. Due to the infrastructure similarity, the Parallel RNN system and the Hierarchical RNN system introduce 7https://github.com/redpony/cdec 8https://github.com/nyu-dl/ dl4mt-tutorial 692 # System #Params Time MT06 MT02 MT03 MT04 MT05 All 1 cdec 33.4 34.8 33.0 35.7 32.1 34.2 2 RNNSearch 60.6M 153m 34.0 36.9 33.7 37.0 34.1 35.6 3 Parallel RNN +1.1M +9m 34.8† 37.8‡ 34.2 38.3‡ 34.6 36.6‡ 4 Hierarchical RNN +1.2M +9m 35.2‡ 37.2 34.7† 38.7‡ 34.7† 36.7‡ 5 Mixed RNN +0 +40m 35.6‡ 37.7† 34.9‡ 38.6‡ 35.5‡ 37.0‡ Table 1: Evaluation of the translation performance. † and ‡: significant over RNNSearch at 0.05/0.01, tested by bootstrap resampling (Koehn, 2004). “+” is the additional number of parameters or training time on the top of the baseline system RNNSearch. Column Time indicates the training time in minutes per epoch for different NMT models the similar size of additional parameters, resulting from the RNN model for structural label sequences (about 0.1M parameters) and catering either the augmented annotation vectors (as shown in Figure 4 (a)) or the augmented word embeddings (as shown in Figure 4 (b)) (the remain parameters). 
It is not surprising that the Mixed RNN system does not require any additional parameters since though the input sequence becomes longer, we keep the vocabulary size unchanged, resulting in no additional parameters. Speed Introducing the source syntax slightly slows down the training speed. When running on a single GPU GeForce GTX 1080, the baseline model speeds 153 minutes per epoch with 14K updates while the proposed structural label RNNs in both Parallel RNN and Hierarchical RNN systems only increases the training time by about 6% (thanks to the small size of structural label embeddings and annotation vectors), and the Mixed RNN system spends 26% more training time to cater the triple sized input sequence. Comparison with the baseline NMT model (RNNSearch) While all the three proposed NMT models outperform RNNSearch, the Parallel RNN system and the Hierarchical RNN system achieve similar accuracy (e.g., 36.6 v.s. 36.7). Besides, the Mixed RNN system achieves the best accuracy overall test sets with the only exception of NIST MT 02. Over all test sets, it outperforms RNNSearch by 1.4 BLEU points and outperforms the other two improved NMT models by 0.3∼0.4 BLEU points, suggesting the benefits of high degree of coupling the word sequence and the structural label sequence. This is very encouraging since the Mixed RNN encoder is the simplest, without introducing new parameters and with only slight additional training time. Figure 6: Performance of the generated translations with respect to the lengths of the input sentences. Comparison with the SMT model (cdec) Table 1 also shows that all NMT systems outperform the SMT system. This is very consistent with other studies on Chinese-to-English translation (Mi et al., 2016; Tu et al., 2017b; Wang et al., 2017). 5 Analysis As the proposed Mixed RNN system achieves the best performance, we further look at the RNNSearch system and the Mixed RNN system to explore more on how syntactic information helps in translation. 5.1 Effects on Long Sentences Following Bahdanau et al. (2015), we group sentences of similar lengths together and compute BLEU scores. Figure 6 presents the BLEU scores over different lengths of input sentences. It shows that Mixed RNN system outperforms RNNSearch over sentences with all different lengths. It also shows that the performance drops substantially 693 System AER RNNSearch 50.1 Mixed RNN 47.9 Table 2: Evaluation of alignment quality. The lower the score, the better the alignment quality. when the length of input sentences increases. This performance trend over the length is consistent with the findings in (Cho et al., 2014a; Tu et al., 2016, 2017a). We also observe that the NMT systems perform surprisingly bad on sentences over 50 in length, especially compared to the performance of SMT system (i.e., cdec). We think that the bad behavior of NMT systems towards long sentences (e.g., length of 50) is due to the following two reasons: (1) the maximum source sentence length limit is set as 50 in training, 9 making the learned models not ready to translate sentences over the maximum length limit; (2) NMT systems tend to stop early for long input sentences. 5.2 Analysis on Word Alignment Due to the capability of carrying syntactic information in source annotation vectors, we conjecture that our model with source syntax is also beneficial for alignment. 
To test this hypothesis, we carry out experiments of the word alignment task on the evaluation dataset from Liu and Sun (2015), which contains 900 manually aligned Chinese-English sentence pairs. We force the decoder to output reference translations, as to get automatic alignments between input sentences and their reference translations. To evaluate alignment performance, we report the alignment error rate (AER) (Och and Ney, 2003) in Table 2. Table 2 shows that source syntax information improves the attention model as expected by maintaining an annotation vector summarizing structural information on each source word. 5.3 Analysis on Phrase Alignment The above subsection examines the alignment performance at the word level. In this subsection, we turn to phrase alignment analysis by moving from word unit to phrase unit. Given a source phrase XP, we use word alignments to examine if the phrase is translated continuously (Cont.), or dis9Though the maximum source length limit in Mixed RNN system is set to 150, it approximately contains 50 words in maximum. System XP Cont. Dis. Un. RNNSearch PP 57.3 33.6 9.1 NP 59.8 25.5 14.7 CP 47.3 44.6 8.1 QP 54.0 22.2 23.8 ALL 58.1 27.1 14.8 Mixed RNN PP 63.3 27.5 9.2 NP 63.1 23.1 13.8 CP 54.5 36.6 8.9 QP 56.2 19.7 24.1 ALL 60.4 25.0 14.6 Table 3: Percentages (%) of syntactic phrases in our test sets being translated continuously, discontinuously, or not being translated. Here PP is for prepositional phrase, NP for noun phrase, CP for clause headed by a complementizer, QP for quainter phrase. continuously (Dis.), or if it is not translated at all (Un.). There are some phrases, such as noun phrases (NPs), prepositional phrases (PPs) that we usually expect to have a continuous translation. With respect to several such types of phrases, Table 3 shows how these phrases are translated. From the table, we see that translations of RNNSearch system do not respect source syntax very well. For example, in RNNSearch translations, 57.3%, 33.6%, and 9.1% of PPs are translated continuously, discontinuously, and untranslated, respectively. Fortunately, our Mixed RNN system is able to have more continuous translation for those phrases. Table 3 also suggests that there is still much room for NMT to show more respect to syntax. 5.4 Analysis on Over Translation To estimate the over translation generated by NMT, we propose ratio of over translation (ROT): ROT = P wi t(wi) |w| (1) where |w| is the number of words in consideration, t(wi) is the times of over translation for word wi. Given a word w and its translation e = e1e2 . . . en, we have: t(w) = |e| −|uniq(e)| (2) where |e| is the number of words in w’s translation e, while |uniq(e)| is the number of unique words in e. For example, if a source word 香 694 System POS ROT (%) RNNSearch NR 15.7 CD 7.4 DT 4.9 NN 8.0 ALL 5.5 Mixed RNN NR 12.3 CD 5.1 DT 2.4 NN 6.8 ALL 4.5 Table 4: Ratio of over translation (ROT) on test sets. Here NR is for proper noun, CD for cardinal number, DT for determiner, and NN for nouns except proper nouns and temporal nouns. 港/xiangkang is translated as hong kong hong kong, we say it being over translated 2 times. Table 4 presents ROT grouped by some typical POS tags. It is not surprising that RNNSearch system has high ROT with respect to POS tags of NR (proper noun) and CD (cardinal number): this is due to the fact that the two POS tags include high percentage of unknown words which tend to be translated multiple times in translation. 
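The statistic in Eqs. (1) and (2) is straightforward to compute once each source word's aligned target words are known; the sketch below takes that word-to-translation mapping as given in a hard-coded toy dictionary, which is an assumption made only for the example.

```python
# Sketch of the ratio of over translation (ROT) from Eqs. (1)-(2). The mapping
# from source words to their aligned target words is assumed to be given; the
# toy dictionary stands in for alignments extracted from real NMT output.

def times_over_translated(target_words):
    """t(w) = |e| - |uniq(e)| for one source word's aligned target words."""
    return len(target_words) - len(set(target_words))

def ratio_of_over_translation(alignments):
    """ROT = sum_i t(w_i) / |w| over the source words under consideration."""
    if not alignments:
        return 0.0
    return sum(times_over_translated(e) for e in alignments.values()) / len(alignments)

toy = {
    "xianggang": ["hong", "kong", "hong", "kong"],  # over-translated twice
    "yinhang": ["bank"],                            # translated once
}
print(ratio_of_over_translation(toy))               # (2 + 0) / 2 = 1.0
```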
Words of DT (determiner) are another source of over translation since they are usually translated to multiple the in English. It also shows that by introducing source syntax, Mixed RNN system alleviates the over translation issue by 18%: ROT drops from 5.5% to 4.5%. 5.5 Analysis on Rare Word Translation We analyze the translation of source-side rare words that are mapped to a special token UNK. Given a rare word w, we examine if it is translated into a non-UNK word (non-UNK), UNK (UNK), or if it is not translated at all (Un.). Table 5 shows how source-side rare words are translated. The four POS tags listed in the table account for about 90% of all rare words in the test sets. It shows that in Mixed RNN system is more likely to translate source-side rare words into UNK on the target side. This is reasonable since the source side rare words tends to be translated into rare words in the target side. Moreover, it is hard to obtain its correct non-UNK translation when a source-side rare word is replaced as UNK. Note that our approach is compatible with with approaches of open vocabulary. Taking the subSystem POS non-UNK UNK Un. RNNSearch NN 27.2 40.4 32.4 NR 22.9 58.5 18.6 VV 34.5 32.9 32.7 CD 10.7 63.4 25.9 ALL 27.2 40.4 32.4 Mixed RNN NN 24.8 41.4 33.8 NR 17.0 64.5 18.6 VV 33.6 34.0 32.3 CD 9.6 68.7 21.7 ALL 23.9 47.5 28.7 Table 5: Percentages (%) of rare words in our test sets being translated into a non-UNK word (nonUNK), UNK (UNK), or if it is not translated at all (Un.). word approach (Sennrich et al., 2016) as an example, for a word on the source side which is divided into several subword units, we can synthesize subPOS nodes that cover these units. For example, if misunderstand/VB is divided into units of mis and understand, we construct substructure (VB (VB-F mis) (VB-I understand)). 6 Related Work While there has been substantial work on linguistically motivated SMT, approaches that leverage syntax for NMT start to shed light very recently. Generally speaking, NMT can provide a flexible mechanism for adding linguistic knowledge, thanks to its strong capability of automatically learning feature representations. Eriguchi et al. (2016) propose a tree-tosequence model that learns annotation vectors not only for terminal words, but also for non-terminal nodes. They also allow the attention model to align target words to non-terminal nodes. Our approach is similar to theirs by using source-side phrase parse tree. However, our Mixed RNN system, for example, incorporates syntax information by learning annotation vectors of syntactic labels and words stitchingly, but is still a sequenceto-sequence model, with no extra parameters and with less increased training time. Sennrich and Haddow (2016) define a few linguistically motivated features that are attached to each individual words. Their features include lemmas, subword tags, POS tags, dependency labels, etc. They concatenate feature embeddings with word embeddings and feed the concatenated em695 beddings into the NMT encoder. On the contrast, we do not specify any feature, but let the model implicitly learn useful information from the structural label sequence. Shi et al. (2016) design a few experiments to investigate if the NMT system without external linguistic input is capable of learning syntactic information on the source-side as a by-product of training. However, their work is not focusing on improving NMT with linguistic input. Moreover, we analyze what syntax is disrespected in translation from several new perspectives. 
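Returning briefly to the open-vocabulary note at the end of Section 5.5, the following is a minimal, hypothetical sketch of how sub-POS nodes could be synthesized over subword units, reproducing the (VB (VB-F mis) (VB-I understand)) example. Only the -F/-I labeling and the bracketed form come from the text; the function name and the single-unit fallback are our assumptions.

```python
# Hedged sketch: wrap the subword units of one source word in sub-POS nodes
# under its original POS tag, as in the misunderstand/VB example above.

def synthesize_subword_structure(pos_tag, subword_units):
    """Build a bracketed substructure covering the subword units of a word."""
    if len(subword_units) == 1:                     # word was not split
        return f"({pos_tag} {subword_units[0]})"
    parts = [f"({pos_tag}-F {subword_units[0]})"]   # -F: first unit
    parts += [f"({pos_tag}-I {u})" for u in subword_units[1:]]  # -I: inner units
    return f"({pos_tag} {' '.join(parts)})"

print(synthesize_subword_structure("VB", ["mis", "understand"]))
# (VB (VB-F mis) (VB-I understand))
```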
Garc´ıa-Mart´ınez et al. (2016) generalize NMT outputs as lemmas and morphological factors in order to alleviate the issues of large vocabulary and out-of-vocabulary word translation. The lemmas and corresponding factors are then used to generate final words in target language. Though they use linguistic input on the target side, they are limited to the word level features. Phrase level, or even sentence level linguistic features are harder to obtain for a generation task such as machine translation, since this would require incremental parsing of the hypotheses at test time. 7 Conclusion In this paper, we have investigated whether and how source syntax can explicitly help NMT to improve its translation accuracy. To obtain syntactic knowledge, we linearize a parse tree into a structural label sequence and let the model automatically learn useful information through it. Specifically, we have described three different models to capture the syntax knowledge, i.e., Parallel RNN, Hierarchical RNN, and Mixed RNN. Experimentation on Chinese-to-English translation shows that all proposed models yield improvements over a state-ofthe-art baseline NMT system. It is also interesting to note that the simplest model (i.e., Mixed RNN) achieves the best performance, resulting in obtaining significant improvements of 1.4 BLEU points on NIST MT 02 to 05. In this paper, we have also analyzed the translation behavior of our improved system against the state-of-the-art NMT baseline system from several perspectives. Our analysis shows that there is still much room for NMT translation to be consistent with source syntax. In our future work, we expect several developments that will shed more light on utilizing source syntax, e.g., designing novel syntactic features (e.g., features showing the syntactic role that a word is playing) for NMT, and employing the source syntax to constrain and guild the attention models. Acknowledgments The authors would like to thank three anonymous reviewers for providing helpful comments, and also acknowledge Xing Wang, Xiangyu Duan, Zhengxian Gong for useful discussions. This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61331011, 61401295). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics 33(2):201–228. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machinetranslation: Encoder-decoder approaches. In Proceedings of SSST 2014. pages 103–111. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014. pages 1724–1734. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of EMNLP 2016. pages 2331–2336. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of ACL 2010 System Demonstrations. pages 7–12. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of ACL 2016. 
pages 823–833. Mercedes Garc´ıa-Mart´ınez, Loic Barrault, and Fethi Bougares. 2016. Factored neural machine translation. In arXiv:1609.04621. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. Improving neural networks by 696 preventing co-adaptation of feature detectors. In arXiv:1207.0580. S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal neural machine translation systems for wmt’15. In Proceedings of WMT 2015. pages 134–140. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004. pages 388–395. Junhui Li, Philip Resnik, and Hal Daum´e III. 2013. Modeling syntactic and semantic structures in hierarchical phrase-based translation. In Proceedings of HLT-NAACL 2013. pages 540–549. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of ACL-COLING 2006. pages 609–616. Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with non-local features. In Proceedings of AAAI 2015. pages 857–868. Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of IWSLT 2015. pages 76–79. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP 2015. pages 1412–1421. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of ACL-HLT 2008. pages 1003–1011. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of EMNLP 2016. pages 2283–2288. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL 2002. pages 311–318. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLTNAACL 2007. pages 404–411. Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation. pages 83–91. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016. pages 1715–1725. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-HLT 2008. pages 577–585. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of EMNLP 2016. pages 1526–1534. Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017a. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics 5:87–99. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017b. Neural machine translation with reconstruction. In Proceedings of AAAI 2017. pages 3097–3103. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL 2016. pages 76–85. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. 
Grammar as a foreign language. In Proceedings of NIPS 2015. Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017. Neural machine translation advised by statistical machine translation. In Proceedings of AAAI 2017. pages 3330– 3336. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. In arXiv preprint arXiv:1609.08144. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering 11(2):207–238. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. In arXiv:1212.5701. 697
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 698–707 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1065 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 698–707 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1065 Sequence-to-Dependency Neural Machine Translation Shuangzhi Wu†∗, Dongdong Zhang‡ , Nan Yang‡ , Mu Li‡ , Ming Zhou‡ †Harbin Institute of Technology, Harbin, China ‡Microsoft Research {v-shuawu, dozhang, nanya, muli, mingzhou}@microsoft.com Abstract Nowadays a typical Neural Machine Translation (NMT) model generates translations from left to right as a linear sequence, during which latent syntactic structures of the target sentences are not explicitly concerned. Inspired by the success of using syntactic knowledge of target language for improving statistical machine translation, in this paper we propose a novel Sequence-to-Dependency Neural Machine Translation (SD-NMT) method, in which the target word sequence and its corresponding dependency structure are jointly constructed and modeled, and this structure is used as context to facilitate word generations. Experimental results show that the proposed method significantly outperforms state-of-the-art baselines on Chinese-English and JapaneseEnglish translation tasks. 1 Introduction Recently, Neural Machine Translation (NMT) with the attention-based encoder-decoder framework (Bahdanau et al., 2015) has achieved significant improvements in translation quality of many language pairs (Bahdanau et al., 2015; Luong et al., 2015a; Tu et al., 2016; Wu et al., 2016). In a conventional NMT model, an encoder reads in source sentences of various lengths, and transforms them into sequences of intermediate hidden vector representations. After weighted by attention operations, combined hidden vectors are used by the decoder to generate translations. In most of cases, both encoder and decoder are implemented as recurrent neural networks (RNNs). ∗Contribution during internship at Microsoft Research. Many methods have been proposed to further improve the sequence-to-sequence NMT model since it was first proposed by Sutskever et al. (2014) and Bahdanau et al. (2015). Previous work ranges from addressing the problem of out-ofvocabulary words (Jean et al., 2015), designing attention mechanism (Luong et al., 2015a), to more efficient parameter learning (Shen et al., 2016), using source-side syntactic trees for better encoding (Eriguchi et al., 2016) and so on. All these NMT models employ a sequential recurrent neural network for target generations. Although in theory RNN is able to remember sufficiently long history, we still observe substantial incorrect translations which violate long-distance syntactic constraints. This suggests that it is still very challenging for a linear RNN to learn models that effectively capture many subtle long-range word dependencies. For example, Figure 1 shows an incorrect translation related to the long-distance dependency. The translation fragment in italic is locally fluent around the word is, but from a global view the translation is ungrammatical. Actually, this part of translation should be mostly affected by the distant plural noun foreigners rather than words Venezuelan government nearby. 
Fortunately, such long-distance word correspondence can be well addressed and modeled by syntactic dependency trees. In Figure 1, the head word foreigners in the partial dependency tree (top dashed box) can provide correct structural context for the next target word, with this information it is more likely to generate the correct word will rather than is. This structure has been successfully applied to significantly improve the performance of statistical machine translation (Shen et al., 2008). On the NMT side, introducing target syntactic structures could help solve the problem of ungrammatical output because it can bring two advantages over state-of-the-art NMT models: 698 a) syntactic trees can be used to model the grammatical validity of translation candidates; b) partial syntactic structures can be used as additional context to facilitate future target word prediction. Source : 他还说, 来委外国人若攻击委内瑞拉政府 会面临严重后果, 将被驱逐出境. partial tree decoder Ref : He added that foreign visitors to Venezuela who criticize the Venezuelan government will face serious consequences and will be deported . NMT : He also said that foreigners to Venezuela who attack the Venezuelan government is facing serious consequences, will be deported . … foreigners to Venezuela who attack the Venezuelan government attack the Venezuelan government … is ungrammatical structure Figure 1: Dependency trees help the prediction of the next target word. “NMT” refers to the translation result from a conventional NMT model, which fails to capture the long distance word relation denoted by the dashed arrow. However, it is not trivial to build and leverage syntactic structures on the target side in current NMT framework. Several practical challenges arise: (1) How to model syntactic structures such as dependency parse trees with recurrent neural network; (2) How to efficiently perform both target word generation and syntactic structure construction tasks simultaneously in a single neural network; (3) How to effectively leverage target syntactic context to help target word generation. To address these issues, we propose and empirically evaluate a novel Sequence-to-Dependency Neural Machine Translation (SD-NMT) model in our paper. An SD-NMT model encodes source inputs with bi-directional RNNs and associates them with target word prediction via attention mechanism as in most NMT models, but it comes with a new decoder which is able to jointly generate target translations and construct their syntactic dependency trees. The key difference from conventional NMT decoders is that we use two RNNs, one for translation generation and the other for dependency parse tree construction, in which incremental parsing is performed with the arc-standard shift-reduce algorithm proposed by Nivre (2004). We will describe in detail how these two RNNs work interactively in Section 3. We evaluate our method on publicly available data sets with Chinese-English and JapaneseEnglish translation tasks. Experimental results show that our model significantly improves translation accuracy over the conventional NMT and SMT baseline systems. 2 Background 2.1 Neural Machine Translation As a new paradigm to machine translation, NMT is an end-to-end framework (Sutskever et al., 2014; Bahdanau et al., 2015) which directly models the conditional probability P(Y |X) of target translation Y = y1,y2,...,yn given source sentence X = x1,x2,...,xm. An NMT model consists of two parts: an encoder and a decoder. 
Both of them utilize recurrent neural networks which can be a Gated Recurrent Unit (GRU) (Cho et al., 2014) or a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) in practice. The encoder bidirectionally encodes a source sentence into a sequence of hidden vectors H = h1,h2,...,hm with a forward RNN and a backward RNN. Then the decoder predicts target words one by one with probability P(Y |X) = n Y j=1 P(yj|y<j, H) (1) Typically, for the jth target word, the probability P(yj|y<j, H) is computed as P(yj|y<j, H) = g(sj, yj−1, cj) (2) where g is a nonlinear function that outputs the probability of yj, and sj is the RNN hidden state. The context cj is calculated at each timestamp j based on H by the attention network cj = m X k=1 ajkhk (3) ajk = exp(ejk) Pm i=1 exp(eji) (4) ejk = vT a tanh(Wasj−1 + Uahk) (5) where va, Wa, Ua are the weight matrices. The attention mechanism is effective to model the correspondences between source and target. 699 2.2 Dependency Tree Construction We use a shift-reduce transition-based dependency parser to build the syntactic structure for the target language in our work. Specially, we adopt the arcstandard algorithm (Nivre, 2004) to perform incremental parsing during the translation process. In this algorithm, a stack and a buffer are maintained to store the parsing state over which three kinds of transition actions are applied. Let w0 and w1 be two topmost words in the stack, and ¯w be the current new word in a sequence of input, three transition actions are described as below. • Shift(SH) : Push ¯w to the stack. • Left-Reduce(LR(d)) : Link w0 and w1 with dependency label d as w0 d−→w1, and reduce them to the head w0. • Right-Reduce(RR(d)) : Link w0 and w1 with dependency label d as w0 d←−w1, and reduce them to the head w1. During parsing, an specific structure is used to record the dependency relationship between different words of input sentence. The parsing finishes when the stack is empty and all input words are consumed. As each word must be pushed to the stack once and popped off once, the number of actions needed to parse a sentence is always 2n, where n is the length of the sentence (Nivre, 2004). Because each valid transition action sequence corresponds to a unique dependency tree, a dependency tree can also be equivalently represented by a sequence of transition actions. 3 Sequence-to-Dependency Neural Machine Translation An SD-NMT model is an extension to the conventional NMT model augmented with syntactic structural information of target translation. Given a source sentence X = x1,x2,..,xm, its target translation Y = y1,y2,..,yn and Y ’s dependency parse tree T, the goal of the extension is to enable us to compute the joint probability P(Y, T|X). As in most structural learning tasks, the full prediction of Y and T is further decomposed into a chain of smaller predictions. For translation Y , it is generated in the left-to-right order as y1, y2, .., yn following the way in a normal sequence-to-sequence model. For Y ’s parse tree T, instead of directly modeling the tree itself, we predict a parsing action sequence A which can map Y to T. Thus at top level our SD-NMT model can be formulated as P(Y, T|X) = P(Y, A|X) = P(y1y2..yn, a1, a2..al|X)(6) where A = a1,a2,..,aj,..,al 1 with length l (l = 2n), aj ∈{SH, RR(d), LR(d)}2. Two recurrent neural networks, Word-RNN and Action-RNN, are used to model generation processes of translation sequence Y and parsing action sequence A respectively. 
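To make the arc-standard transitions of Section 2.2 concrete, here is a minimal executable sketch that replays an action sequence over a buffer of target words; it uses the "who are you" example with actions SH SH LR SH RR shown in Figure 2 below. Dependency labels d are omitted and the data structures are illustrative, not from the paper.

```python
# Hedged sketch of the arc-standard transition system: SH pushes the next new
# word, LR makes the stack top w0 the head of the second item w1, RR makes w1
# the head of w0. Arcs are returned as (head, dependent) pairs.

def run_arc_standard(words, actions):
    buffer = list(words)             # words still to be pushed
    stack, arcs = [], []
    for act in actions:
        if act == "SH":              # push the current new word onto the stack
            stack.append(buffer.pop(0))
        elif act == "LR":            # w0 (top) heads w1 (second); pop w1
            w0, w1 = stack[-1], stack[-2]
            arcs.append((w0, w1))
            del stack[-2]
        elif act == "RR":            # w1 (second) heads w0 (top); pop w0
            w0, w1 = stack[-1], stack[-2]
            arcs.append((w1, w0))
            stack.pop()
    return arcs, stack               # any remaining stack item is the root

arcs, stack = run_arc_standard(["who", "are", "you"],
                               ["SH", "SH", "LR", "SH", "RR"])
print(arcs)   # [('are', 'who'), ('are', 'you')]
print(stack)  # ['are'] -- the root, consumed once EOS is handled
```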
Figure 2 shows an example how translation Y and its parsing actions are predicted step by step. <s> Word RNN Action RNN 𝑎𝑟𝑒 𝑦𝑜𝑢 SH 𝑤ℎ𝑜 𝑎𝑟𝑒 <s> SH SH LR LR RR SH SH 𝑤ℎ𝑜 𝑎𝑟𝑒 𝑦𝑜𝑢 SH 𝑤ℎ𝑜 … … Figure 2: Decoding example of our SD-NMT model for target sentence “who are you” with transition action sequence “SH SH LR SH RR”. The ending symbol EOS is omitted. Because the lengths of Word-RNN and ActionRNN are different, they are designed to work in a mutually dependent way: a target word is only allowed to be generated when the SH action is predicted in the action sequence. In this way, we can perform incremental dependency parsing for translation Y and at the same time track the partial parsing status through the translation generation process. For notational clarity, we introduce a virtual translation sequence ˆY =ˆy1,ˆy2,..,ˆyj,..,ˆyl for WordRNN which has the same length l with transition action sequence. ˆyj is defined as ˆyj = ( yvj δ(SH, aj) = 1 yvj−1 δ(SH, aj) = 0 where δ(SH, aj) is 1 when aj = SH, otherwise 0. vj is the index of Y , computed by vj = Pj i=1 δ(SH, ai). Apparently the mapping from ˆY 1In the rest of this paper, aj represents the transition action, rather than the attention weight in Equation 4. 2RR(d) refers to a set of RR actions augmented with dependency labels so as to LR(d). 700 a … 𝐸𝑤0 𝑏0𝑙 𝐾𝑗 𝐸𝑤1 𝐸𝑤0 𝐸𝑤0𝑙 𝑏1𝑟 𝐸𝑤1 𝐸𝑤1𝑟 𝑤0 𝑤1 … stack parsing configuation … 𝑤1𝑙… 𝑤1𝑟… 𝑤0𝑙… 𝑤0𝑟 𝑢𝑛𝑖𝑔𝑟𝑎𝑚 𝑏𝑖𝑔𝑟𝑎𝑚 construction of 𝐾𝑗 Attention partial tree 𝑦1 𝑦2 𝑦3 𝑦4 … 𝑦𝑗−1 𝑇𝑖𝑚𝑒𝑠𝑡𝑎𝑚𝑝 1 2 3 4 𝑗−1 𝑎𝑗−1 𝑎𝑗 𝑎1 𝑎2 𝑎3 𝑎𝑗−2 ⊕ δ 𝑎1 𝑎2 𝑎3 𝑎4 … 𝑎𝑗−1 𝑦0 𝑎0 𝑥1 𝑥2 𝑥3 … 𝑥𝑚 0 1 Word RNN Action RNN Encoder 𝑦1 𝑦2 𝑦3 … 𝑦𝑣𝑗−1 … … … 𝑦1 𝑦3 𝑦𝑗−2 𝑦𝑗−1 𝑦2 𝑌 𝑌 𝐾𝑗 𝑦𝑗 𝑦𝑣𝑗 𝒋 (𝑎) (𝑏) Figure 3: (a) is the overview of SD-NMT model. The dashed arrows mean copying previous recurrent state or word. The two RNNs use the same source context for prediction. aj ∈{SH, RR(d), LR(d)}. The bidirection arrow refers to the interaction between two RNNs. (b) shows the construction of syntactic context. The gray box means the concatenation of vectors to Y is deterministic, and Y can be easily derived given ˆY and A. With the notation of ˆY , the sequence probability of Y and A can be written as P(A|X, ˆY<l) = lY j=1 P(aj|a<j, X, ˆY<j) (7) P( ˆY |X, A≤l) = lY j=1 P(ˆyj|ˆy<j, X, A≤j)δ(SH,aj) (8) where ˆY<j refers to the subsequence ˆy1, ˆy2, .., ˆyj−1, and A≤j to a1, a2, .., aj. Based on Equation 7 and 8, the overall joint model can be computed as P(Y, T|X) = P(A|X, ˆY<l) × P( ˆY |X, A≤l) (9) As we have two RNNs in our model, the termination condition is also different from a conventional NMT model. In decoding, we maintain a stack to track the parsing configuration, and our model terminates once the Word-RNN predicts a special ending symbol EOS and all the words in the stack have been reduced. Figure 3 (a) gives an overview of our SD-NMT model. Due to space limitation, the detailed interconnections between two RNNs are only illustrated at timestamp j. The encoder of our model follows standard bidirectional RNN configuration. At timestamp j during decoding, our model first predicts an action aj by Action-RNN, then WordRNN checks the condition gate δ according to aj. If aj = SH, the Word-RNN will generate a new state (solid arrow) and predict a new target word yvj, otherwise it just copies previous state (dashed arrow) to the current state. For example, at timestamp 3, a3 ̸= SH, the state of Word-RNN is copied from its previous one. Meanwhile, ˆy3 = y2 is used as the immediate proceeding word in translation history. 
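A minimal sketch of the virtual translation sequence defined above: v_j counts the SH actions seen so far, so the Word-RNN contributes a new word on SH steps and repeats the last generated word otherwise. The code replays the Figure 2 example and reproduces the statement that y-hat_3 = y_2; variable names are ours.

```python
# Hedged sketch: derive the virtual sequence Y-hat from the target words Y and
# the transition actions A (the delta gate in the model). In practice the first
# action is always SH, so v >= 1 whenever a word is looked up.

def virtual_sequence(words, actions):
    yhat, v = [], 0
    for a in actions:
        if a == "SH":
            v += 1                    # a new target word is generated
        yhat.append(words[v - 1])     # otherwise repeat the last generated word
    return yhat

# Figure 2 example: Y = "who are you", A = "SH SH LR SH RR" (EOS omitted)
print(virtual_sequence(["who", "are", "you"],
                       ["SH", "SH", "LR", "SH", "RR"]))
# ['who', 'are', 'are', 'you', 'you']  -> yhat_3 equals y_2 ("are")
```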
When computing attention scores, we extend Equation 5 by replacing the decoder hidden state with the concatenation of Word-RNN hidden state s and Action-RNN hidden state s′ (gray boxes in Figure 3). The new attention score is then updated as ejk = vT a tanh(Wa[sj−1; s′ j−1] + Uahk) (10) 3.1 Syntactic Context for Target Word Prediction Syntax has been proven useful for sentence generation task (Dyer et al., 2016). We propose to leverage target syntax to help translation generation. In our model, the syntactic context Kj at timestamp j is defined as a vector which is computed by a feed-forward network based on current 701 parsing configuration of Action-RNN. Denote that w0 and w1 are two topmost words in the stack, w0l and w1l are their leftmost modifiers in the partial tree, w0r and w1r their rightmost modifiers respectively. We define two unigram features and four bigram features. The unigram features are w0 and w1 which are represented by the word embedding vectors. The bigram features are w0w0l, w0w0r, w1w1l and w1w1r. Each of them is computed by bhc = tanh(WbEwh + UbEwhc), h ∈{0, 1}, c ∈{l, r}. These kinds of feature template have beeb proven effective in dependency parsing task (Zhang and Clark, 2008). Based on these features, the syntactic context vector Kj is computed as Kj = tanh(Wk[Ew0; Ew1] + Uk[b0l; b0r; b1l; b1r]) (11) where Wk, Uk, Wb, Ub are the weight matrices, E stands for the embedding matrix. Figure 2 (b) gives an overview of the construction of Kj. Note that zero vector is used for padding the words which are not available in the partial tree, so that all the K vectors have the same input size in computation. Adding Kj to Equation 2, the probabilities of transition action and word in Equation 7 and 8 are then updated as P(aj|a<j, X, ˆY<j) = g(s′ j, aj−1, cj, Kj) (12) P(ˆyj|ˆy<j, X, A≤j) = g(sj, ˆyj−1, cj, Kj) (13) After each prediction step in Word-RNN and Action-RNN, the syntax context vector K will be updated accordingly. Note that K is not used to calculate the recurrent states s in this work. 3.2 Model Training and Decoding For SD-NMT model, we use the sum of loglikelihoods of word sequence and action sequence as objective function for training algorithm, so that the joint probability of target translations and their parsing trees can be maximized: J(θ) = X (X,Y,A)∈D log P(A|X, ˆY<l)+ log P( ˆY |X, A≤l) (14) We also use mini-batch for model training. As the target dependency trees are known in the bilingual corpus during training, we pre-compute the partial tree state and syntactic context at each time stamp for each training instance. Thus it is easy for the model to process multiple trees in one batch. In the decoding process of an SD-NMT model, the score of each search path is the sum of log probabilities of target word sequence and transition action sequence normalized by the sequence length: score = 1 l l X j=1 log P(aj|a<j, X, ˆY<j)+ 1 n l X j=1 δ(SH, aj) log P(ˆyj|ˆy<j, X, A≤j) (15) where n is word sequence length and l is action sequence length. 4 Experiments The experiments are conducted on the ChineseEnglish task as well as the Japanese-English translation tasks where the same data set from WAT 2016 ASPEC corpus (Nakazawa et al., 2016) 3 is used for a fair comparison with other work. In addition to evaluate translation performance, we also investigate the quality of dependency parsing as a by-product and the effect of parsing quality against translation quality. 
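Before turning to the experiments, here is a small sketch of the length-normalized path score in Equation 15, assuming the probabilities of the chosen actions and of the words emitted at SH steps are available from the Action-RNN and Word-RNN beams; the function name and toy numbers are ours.

```python
import math

# Hedged sketch of the beam-search path score in Equation 15: action log-probs
# are averaged over the action sequence length l, while word log-probs (one per
# SH step) are averaged over the word sequence length n.

def joint_path_score(action_probs, word_probs_at_sh):
    l = len(action_probs)             # number of transition actions on the path
    n = len(word_probs_at_sh)         # number of generated target words
    action_term = sum(math.log(p) for p in action_probs) / l
    word_term = sum(math.log(p) for p in word_probs_at_sh) / n
    return action_term + word_term

# Toy usage for a 3-word hypothesis with 5 actions (EOS omitted, as in Figure 2):
print(joint_path_score([0.9, 0.8, 0.7, 0.9, 0.6], [0.5, 0.6, 0.4]))
```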
4.1 Setup In the Chinese-English task, the bilingual training data consists of a set of LDC datasets, 4 which has around 2M sentence pairs. We use NIST2003 as the development set, and the testsets contain NIST2005, NIST2006, NIST2008 and NIST2012. All English words are lowercased. In the Japanese-English task, we use top 1M sentence pairs from ASPEC Japanese-English corpus. The development data contains 1,790 sentences, and the test data contains 1,812 sentences with single reference per source sentence. To train SD-NMT model, the target dependency tree references are needed. As there is no golden annotation of parse trees over the target training data, we use pseudo parsing results as the target dependency references, which are got from an in-house developed arc-eager dependency parser based on work in (Zhang and Nivre, 2011). 3http://orchid.kuee.kyoto-u.ac.jp/ASPEC/ 4LDC2003E14, LDC2005T10, LDC2005E83, LDC2006E26, LDC2006E34, LDC2006E85, LDC2006E92, LDC2003E07, LDC2002E18, LDC2005T06, LDC2003E07, LDC2004T07, LDC2004T08, LDC2005T06 702 Settings NIST 2005 NIST 2006 NIST 2008 NIST 2012 Average HPSMT 35.34 33.56 26.06 27.47 30.61 RNNsearch 38.07 38.95 31.61 28.95 34.39 SD-NMT\K 38.83 39.23 31.92 29.72 34.93 SD-NMT 39.38 41.81 33.06 31.43 36.42 Table 1: Evaluation results on Chinese-English translation task with BLEU% metric. The “Average” column is the averaged result of all test sets. The numbers in bold indicate statistically significant difference (p < 0.05) from baselines. In the neural network training, the vocabulary size is limited to 30K high frequent words for both source and target languages. All low frequent words are normalized into a special token unk and post-processed by following the work in (Luong et al., 2015b). The size of word embedding and transition action embedding is set to 512. The dimensions of the hidden states for all RNNs are set to 1024. All model parameters are initialized randomly with Gaussian distribution (Glorot and Bengio, 2010) and trained on a NVIDIA Tesla K40 GPU. The stochastic gradient descent (SGD) algorithm is used to tune parameters with a learning rate of 1.0. The batch size is set to 96. In the update procedure, Adadelta (Zeiler, 2012) algorithm is used to automatically adapt the learning rate. The beam sizes for both word prediction and transition action prediction are set to 12 in decoding. The baselines in our experiments are a phrasal system and a neural translation system, denoted by HPSMT and RNNsearch respectively. HPSMT is an in-house implementation of the hierarchical phrase-based model (Chiang, 2005), where a 4gram language model is trained using the modified Kneser-Ney smoothing (Kneser and Ney, 1995) algorism over the English Gigaword corpus (LDC2009T13) plus the target data from the bilingual corpus. RNNsearch is an in-house implementation of the attention-based neural machine translation model (Bahdanau et al., 2015) using the same parameter settings as our SD-NMT model including word embedding size, hidden vector dimension, beam size, as well as the same mechanism for OOV word processing. The evaluation results are reported with the case-insensitive IBM BLEU-4 (Papineni et al., 2002). A statistical significance test is performed using the bootstrap resampling method proposed by Koehn (2004) with a 95% confidence level. For Japanese-English task, we use the official evaluation procedure provided by WAT 2016.5, where both BLEU and RIBES (Isozaki et al., 2010) are used for evaluation. 
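For reference, the training configuration described in this setup can be summarized as a single config dictionary; the key names below are ours, while the values are those reported in the text.

```python
# Illustrative restatement of the reported SD-NMT training setup.
SD_NMT_CONFIG = {
    "vocab_size": 30000,           # high-frequency words per language; rest -> unk
    "word_embedding_dim": 512,     # also used for transition-action embeddings
    "rnn_hidden_dim": 1024,        # hidden state size of all RNNs
    "init": "gaussian",            # random Gaussian parameter initialization
    "optimizer": "sgd",            # learning rate 1.0, adapted with Adadelta
    "learning_rate": 1.0,
    "lr_adaptation": "adadelta",
    "batch_size": 96,
    "beam_size_words": 12,         # beam for word prediction
    "beam_size_actions": 12,       # beam for transition-action prediction
}
```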
4.2 Evaluation on Chinese-English Translation We evaluate our method on the Chinese-English translation task. The evaluation results over all NIST test sets against baselines are listed in Table 1. Generally, RNNsearch outperforms HPSMT by 3.78 BLEU points on average while SD-NMT surpasses RNNsearch 2.03 BLUE point gains on average, which shows that NMT models usually achieve better results than SMT models, and our proposed sequence-to-dependency NMT model performs much better than traditional sequence-tosequence NMT model. We also investigate the effect of syntactic knowledge context by excluding its computation in Equation 12 and 13. The alternative model is denoted by SD-NMT\K. According to Table 1, SD-NMT\K outperforms RNNsearch by 0.54 BLEU points but degrades SD-NMT by 1.49 BLEU points on average, which demonstrates that the long distance dependencies captured by the target syntactic knowledge context, such as leftmost/rightmost children together with their dependency relationships, really bring strong positive effects on the prediction of target words. In addition to translation quality, we compare the perplexity (PPL) changes on the development set in terms of numbers of training mini-batches for RNNsearch and SD-NMT in Figure 4. We can see that the PPL of SD-NMT is initially higher than that of RNNsearch, but decreased to be lower over time. This is mainly because the quality of parse tree is too poor at the beginning which degrades translation quality and leads to higher PPL. After some training iterations, the SD-NMT 5http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/index .html 703 BLEU RIBES System Description SMT Hiero 18.72 0.6511 Moses’ Hierarchical Phrase-based SMT SMT Phrase 18.45 0.6451 Moses’ Phrase-based SMT SMT S2T 20.36 0.6782 Moses’ String-to-Tree Syntax-based SMT Cromieres (2016)(Single model) 22.86 Single-layer NMT model without ensemble Cromieres (2016)(Self-ensemble) 24.71 0.7508 Self-ensemble of 2-layer NMT model Cromieres (2016)(4-Ensemble) 26.22 0.7566 Ensemble of 4 single-layer NMT models RNNsearch 23.50 0.7459 Single-layer NMT model SD-NMT 25.93 0.7540 Single-layer SD-NMT model Table 2: Evaluation results on Japanese-English translation task. model learns reasonable inferences of parse trees which begins to help target word generation and leads to lower PPL. iter RNNsearch SD-NMT 1 39.39 46.57 2 37.78 42.5 3 33.73 37.43 4 27.4 29.21 5 27.5 26.67 6 25.09 24.22 7 24.99 23.7 8 24.1 23.5 9 23.94 24.66 10 25.92 23.19 11 24.41 23.35 12 25.67 20.38 13 24.28 21 14 23.14 18.49 15 23.73 19.57 16 20.51 17.58 17 19.58 16.43 18 20.98 17.13 19 18.43 17 20 19.25 17.31 21 18.87 16.75 22 20.18 17.57 23 19.27 16.6 24 17.8 15.2 25 17.26 15.74 26 18.76 16.58 27 17.62 15.88 14 16.5 19 21.5 24 26.5 29 31.5 34 36.5 39 41.5 44 46.5 49 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 PPL Mini-batches(×2000) RNNsearch SD-NMT Figure 4: Perplexity (PPL) changes in terms of numbers of training mini-batches. In our experiments, the time cost of SD-NMT is two times of that for RNNsearch due to a more complicated model structure. But we think it is a worthy trade to pursue high quality translations. 4.3 Evaluation on Japanese-English Translation In this section, we report results on the JapaneseEnglish translation task. To ensure fair comparisons, we use the same training data and follow the pre-processing steps recommended in WAT 20166. Table 2 shows the comparison results from 8 systems with the evaluation metrics of BLEU and RIBES. 
The results in the first 3 rows are produced by SMT systems taken from the official WAT 2016. The remaining results are produced by NMT systems, among which the bottom two row results are taken from our in-house NMT systems and others refer to the work in (Cromieres, 2016; 6http://lotus.kuee.kyoto-u.ac.jp/WAT/baseline/data PreparationJE.html Cromieres et al., 2016) that are the competitive NMT results on WAT 2016. According to Table 2, NMT results still outperform SMT results similar to our Chinese-English evaluation results. The SD-NMT model significantly outperforms most other NMT models, which shows that our proposed approach to modeling target dependency tree benefit NMT systems since our RNNsearch baseline achieves comparable performance with the single layer attention-based NMT system in (Cromieres, 2016). Note that our SD-NMT gets comparable results with the 4 single-layer ensemble model in (Cromieres, 2016; Cromieres et al., 2016). We believe SD-NMT can get more improvements with an ensemble of multiple models in future experiments. 4.4 Effect of the Parsing Accuracy upon Translation Quality The interaction effect between dependency tree conduction and target word generation is investigated in this section. The experiments are conducted on the Chinese-English task over multiple test sets. We evaluate how the quality of dependency trees affect the performance of translation. In the decoding phase of SD-NMT, beam search is applied to the generations of both transition and actions as illustrated in Equation 15. Intuitively, the larger the beam size of action prediction is, the better the dependency tree quality is. We fix the beam size for generating target words to 12, and change the beam size for action prediction to see the difference. Figure 5 shows the evaluation results of all test sets. There is a tendency for BLEU scores to increase with the growth of action prediction beam size. The reason is that the translation quality increases as the quality of dependency tree improves, which shows the construction of dependency trees can boost the generation of target 704 beamsize NIST2005 NIST2006 NIST2008 NIST2012 2 37.56 39.3 30.69 29.41 4 38.77 40.64 32.06 30.63 6 38.93 41.32 32.63 31.07 8 39.34 41.52 32.88 31.32 10 39.32 41.65 32.82 31.41 12 39.38 41.81 33.06 31.43 37.56 38.77 38.93 39.34 39.32 39.38 37 37.5 38 38.5 39 39.5 2 4 6 8 10 12 BLEU(%) Beam size of action prediction NIST2005 39.3 40.64 41.32 41.52 41.65 41.81 39 39.5 40 40.5 41 41.5 42 2 4 6 8 10 12 BLEU(%) Beam size of action prediction NIST2006 30.69 32.06 32.63 32.88 32.82 33.06 30 30.5 31 31.5 32 32.5 33 33.5 2 4 6 8 10 12 BLEU(%) Beam size of action prediction NIST2008 29.41 30.63 31.07 31.32 31.41 31.43 29 29.5 30 30.5 31 31.5 2 4 6 8 10 12 BLEU(%) Beam size of action prediction NIST2012 Figure 5: Translation performance against the beam size of action prediction. words, and vice versa we believe. 4.5 Quality Estimation of Dependency Tree Construction As a by-product, the quality of dependency trees not only affects the performance of target word generation, but also influences the possible downstream processors or tasks such as text analyses. The direct evaluation of tree quality is not feasible due to the unavailable golden references. So we resort to estimating the consistency between the by-products and the parsing results of our standalone dependency parser with state-of-the-art performance. The higher the consistency is, the closer the performance of by-product is to the standalone parser. 
To reduce the influence of ill-formed data as much as possible, we build the evaluation data set by heuristically selecting 360 SD-NMT translation results together with their dependency trees from NIST test sets where both source- and target-side do not contain unk and have a length of 20-30. We then take the parsing results of the stand-alone parser for these translations as references to indirectly estimate the quality of byproducts. We get a UAS (unlabeled attachment score) of 94.96% and a LAS (labeled attachment score) of 93.92%, which demonstrates that the dependency trees produced by SD-NMT are much similar with the parsing results from the standalone parser. 4.6 Translation Example In this section, we give a case study to explain how our method works. Figure 6 shows a translation example from the NIST testsets. SMT and RNNsearch refer to the translation results from the baselines HPSMT and NMT. For our SD-NMT model, we list both the generated translation and its corresponding dependency tree. We find that the translation of SMT is disfluent and ungrammatical, whereas RNNsearch is better than SMT. Although the translation of RNNsearch is locally fluent around word “have” in the rectangle, both its grammar is incorrect and its meaning is inaccurate from a global view. The word “have” should be in a singular form as its subject is “safety” rather than “workers”. For our SD-NMT model, we can see that the translation is much better than baselines and the dependency tree is reasonable. The reason is that after generating the word “workers”, the previous subtree in the gray region is transformed to the syntactic context which can guide the generation of the next word as illustrated by the dashed arrow. Thus our model is more likely to generate the correct verb “is” with singular form. In addition, the global structure helps the model correctly identify the inverted sentence pattern of the former translated part and make better choices for the future translation (“only when .. can ..” in our translation, “only when .. will ..” in the reference), which remains a challenge for conventional NMT model. 5 Related Work Incorporating linguistic knowledge into machine translation has been extensively studied in Statistic Machine Translation (SMT) (Galley et al., 2006; Shen et al., 2008; Liu et al., 2006). Liu et al. (2006) proposed a tree-to-string alignment template for SMT to leverage source side syntactic information. Shen et al. (2008) proposed a target dependency language model for SMT to employ target-side structured information. These methods show promising improvement for SMT. Recently, neural machine translation (NMT) has achieved better performance than SMT in many language pairs (Luong et al., 2015a; Zhang et al., 2016; Shen et al., 2016; Wu et al., 2016; Neubig, 2016). In a vanilla NMT model, source and target sentences are treated as sequences where the syntactic knowledge of both sides is neglected. Some effort has been done to incorporate source syntax into NMT. Eriguchi et al. (2016) proposed a tree-to-sequence attentional NMT model where source-side parse tree was used and achieved promising improvement. Intuitively, adding source syntactic information to 705 [Source] 只有施工人员的安全得到了保证, 才能继续施工. [Reference] only when the safety of the workers is guaranteed will they continue with the project . [HPSMT] only safety is assured of construction personnel , to continue construction . [RNNsearch] only when the safety of construction workers have been guaranteed to continue construction . 
[SD-NMT] only when the safety of the workers is guaranteed can we continue to work . nsubjpass nsubj auxpass punct advmod pobj aux prep xcomp the of workers safety is continue guaranteed can work we . only when the to dep aux det det ccomp Figure 6: Translation examples of SMT, RNNsearch and our SD-NMT on Chinese-English translation task. The italic words on the arrows are dependency labels. The ending symbol EOS is omitted. RNNsearch fails to capture the long dependency which leads to an ungrammatical result. Whereas with the help of the syntactic tree, our SD-NMT can get a much better translation. NMT is straightforward, because the source sentence is definitive and easy to attach extra information. However, it is non-trivial to add target syntax as target words are uncertain in decoding process. Up to now, there is few work that attempts to build and leverage target syntactic information for NMT. There has been work that incorporates syntactic information into NLP tasks with neural networks. Dyer et al. (2016) presented a RNN grammar for parsing and language modeling. They replaced SH with a set of generative actions to generate words under a Stack LSTM framework (Dyer et al., 2015), which achieves promising results for language modeling on the Penn Treebank data. In our work, we propose to involve target syntactic trees into NMT model to jointly learn target translation and dependency parsing where target syntactic context over the parse tree is used to improve the translation quality. 6 Conclusion and Future Work In this paper, we propose a novel string-todependency translation model over NMT. Our model jointly performs target word generation and arc-standard dependency parsing. Experimental results show that our method can boost the two procedures and achieve significant improvements on the translation quality of NMT systems. In future work, along this research direction, we will try to integrate other prior knowledge, such as semantic information, into NMT systems. In addition, we will apply our method to other sequenceto-sequence tasks, such as text summarization, to verify the effectiveness. Acknowledgments We are grateful to the anonymous reviewers for their insightful comments. We also thank Shujie Liu and Zhirui Zhang for the helpful discussions. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015 . David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of ENMLP 2014. Fabien Cromieres. 2016. Kyoto-nmt: a neural machine translation implementation in chainer. In Proceedings of COLING 2016. Fabien Cromieres, Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2016. Kyoto university participation to wat 2016. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016). pages 166–174. 706 Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of ACL 2015. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the NAACL 2016. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. 
Tree-to-sequence attentional neural machine translation. In Proceedings of ACL 2016. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of ACL 2006. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Aistats. volume 9, pages 249–256. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8). Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of EMNLP. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on. IEEE, volume 1, pages 181–184. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP. Citeseer, pages 388–395. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of ACL 2006. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP 2015. Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of ACL 2015. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. Aspec: Asian scientific paper excerpt corpus. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016). European Language Resources Association (ELRA), Portoroz, Slovenia, pages 2204–2208. Graham Neubig. 2016. Lexicons and minimum risk training for neural machine translation: NAISTCMU at WAT2016. In Proceedings of the 3nd Workshop on Asian Translation (WAT2016). Osaka, Japan. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002. Libin Shen, Jinxi Xu, and Ralph M Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In ACL. pages 577–585. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL 2016. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL 2016. 
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 . Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Biao Zhang, Deyi Xiong, jinsong su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of EMNLP 2016. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graphbased and transition-based dependency parsing. In EMNLP2008. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL 2011. 707
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 708–717 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1066 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 708–717 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1066 Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning Jing Ma1, Wei Gao2, Kam-Fai Wong1,3 1The Chinese University of Hong Kong, Hong Kong SAR 2Qatar Computing Research Institute, Doha, Qatar 3MoE Key Laboratory of High Confidence Software Technologies, China 1{majing,kfwong}@se.cuhk.edu.hk, [email protected] Abstract How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models. 1 Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas. The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory1. The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line. We can see that after the initial post, the tweet 1https://www.nytimes.com/2016/11/20/ business/media/how-fake-news-spreads. html was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread. A widely accepted definition of rumor is “unverified and instrumentally relevant information statements in circulation” (DiFonzo and Bordia, 2007). This unverified information may eventually turn out to be true, or partly or entirely false. In today’s ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society. Therefore, it is crucial to track and debunk such rumors in timely manner. Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors. However, such endeavor is manual, thus prone to poor coverage and low speed. Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.). But such an approach was over simplified as they ignored the dynamics of rumor propagation. 
Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013, 2017) rather than the structure of propagation. So, can the propagation structure make any difference for differentiating rumors from nonrumors? Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014). However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017). Intuitively, for “successful” rumors being propagated as widely 708 Figure 1: An illustration of how the rumor about “buses used to ship in paid anti-Trump protesters to Austin, Texas” becomes viral, where ‘*’ indicates the level of influence. as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a lot of audiences joining in promoting the propagation. We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share. Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors. Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased. Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color). The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users’ stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets. Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did. In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general. Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2: Fragments of the propagation for two source tweets. Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps. et al., 2011; Ma et al., 2015, 2016) cannot be applied easily on such complex, dynamic structures. To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user’s interactions to one another triggered by the source tweet. Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees. 
Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog 709 posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts. The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors. We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission. Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin. Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not. Since a rumor often begins as unverified and later turns out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016), here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem. 2 Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991). Castillo et al. (2011) studied information credibility on Twitter using a wide range of hand-crafted features. Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015). Zhao et al. (2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor. All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data. Some studies focus on finding temporal patterns for understanding rumor diffusion. Kown et al. (2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume. Ma et al. (2015) extended the model using time series to capture the variation of features over time. Friggeri et al. (2014) and Hannak et al. (2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites. More recently, Ma et al. (2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times. Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel. Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies. Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task. Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011). 
Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001), question-answering (Moschitti, 2006), semantic analysis (Moschitti, 2004), relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010). These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors. Our proposed method is a substantial extension of tree kernel for modeling such structures. 3 Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users. Once a user has posted a tweet, all his followers will receive the tweet. Furthermore, Twitter allows a user to retweet or comment another user’s post, so that the information could reach beyond the network of the original creator. We model the propagation of each source tweet as a tree structure T(r) = ⟨V, E⟩, where r is the source tweet as well as the root of the tree, V refers to a set of nodes each representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V . If there exists a directed edge from vi to vj, it means vj is a direct response to vi. More specifically, each node v ∈V is represented as a tuple v = (uv, cv, tv), which provides 710 the following information: uv is the creator of the post, cv represents the text content of the post, and tv is the time lag between the source tweet r and v. In our case, uv contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., cv is a vector of binary features based on uni-grams and/or bi-grams representing the post’s content. 4 Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK). Our task is, given a propagation tree T(r) of a source tweet r, to predict the label of r. 4.1 Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on. Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees. Given a syntactic parse tree, each node with its children is associated with a grammar production rule. Figure 3 illustrates the syntactic parse tree of “cut a tree” and its subtrees. A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included. For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP →D N (Collins and Duffy, 2001). Following Collins and Duffy (2001), given two parse trees T1 and T2, the kernel function K(T1, T2) is defined as: X vi∈V1 X vj∈V2 ∆(vi, vj) (1) where V1 and V2 are the sets of all nodes respectively in T1 and T2, and each node is associated with a production rule, and ∆(vi, vj) evaluates the common subtrees rooted at vi and vj. 
∆(vi, vj) can be computed using the following recursive procedure (Collins and Duffy, 2001): 1) if the production rules at vi and vj are different, then ∆(vi, vj) = 0; 2) else if the production rules at vi and vj are same, and vi and vj have only leaf children Figure 3: A syntactic parse tree and subtrees. (i.e., they are pre-terminal symbols), then ∆(vi, vj) = λ; 3) else ∆(vi, vj) = λ Qmin(nc(vi),nc(vj)) k=1 (1 + ∆(ch(vi, k), ch(vj, k))). where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤1) is a decay factor. λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size. 4.2 Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties. However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same. With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes vi and vj (we simplify the node representation for instance vi = (ui, ci, ti)) as the following: f(vi, vj) = e−t (αE(ui, uj) + (1 −α)J (ci, cj)) where t = |ti −tj| is the absolute difference between the time lags of vi and vj, E and J are 711 user-based similarity and content-based similarity, respectively, and α is the trade-off parameter. The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation. For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar. The user-based similarity is defined as an Euclidean distance E(ui, uj) = ||ui −uj||2, where ui and uj are the user vectors of node vi and vj and || • ||2 is the 2-norm of a vector. Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation. Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (ci, cj) = |Ngram(ci) ∩Ngram(cj)| |Ngram(ci) ∪Ngram(cj)| where ci and cj are the sets of content words in two nodes. For n-grams here, we adopt both uni-grams and bi-grams. It can capture cue terms e.g., ‘false’, ‘debunk’, ‘not true’, etc. commonly occurring in rumors but not in non-rumors. Given two propagation trees T1 = ⟨V1, E1⟩and T2 = ⟨V2, E2⟩, PTK aims to compute the similarity between T1 and T2 iteratively based on enumerating all pairs of most similar subtrees. 
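Before turning to the kernel computation over whole trees (described next), the node-level similarity can be made concrete with a minimal sketch. This is not the authors' implementation: the Node fields, helper names, the default trade-off α = 0.5, and the time unit are illustrative assumptions; only the formula f(vi, vj) = e^{-t}(αE(ui, uj) + (1 − α)J(ci, cj)) with Euclidean E and Jaccard J follows the definition above.

```python
# Minimal sketch of the node-similarity function f defined above.
# Field names, alpha = 0.5, and the time unit (hours) are assumptions.
from dataclasses import dataclass
from typing import FrozenSet
import math
import numpy as np


@dataclass
class Node:
    user: np.ndarray          # user vector u_v (e.g., #followers, verification status, ...)
    ngrams: FrozenSet[str]    # uni-grams and bi-grams of the post content c_v
    time_lag: float           # t_v: time lag from the source tweet (hours assumed here)


def euclidean(u_i: np.ndarray, u_j: np.ndarray) -> float:
    """E(u_i, u_j) = ||u_i - u_j||_2 over the user vectors."""
    return float(np.linalg.norm(u_i - u_j))


def jaccard(c_i: FrozenSet[str], c_j: FrozenSet[str]) -> float:
    """J(c_i, c_j): Jaccard coefficient over the n-gram sets of two posts."""
    union = c_i | c_j
    return len(c_i & c_j) / len(union) if union else 0.0


def node_similarity(v_i: Node, v_j: Node, alpha: float = 0.5) -> float:
    """f(v_i, v_j): time-scaled combination of user- and content-based similarity."""
    t = abs(v_i.time_lag - v_j.time_lag)
    return math.exp(-t) * (alpha * euclidean(v_i.user, v_j.user)
                           + (1.0 - alpha) * jaccard(v_i.ngrams, v_j.ngrams))
```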
First, for each node vi ∈V1, we obtain v′ i ∈V2, the most similar node of vi from V2: v′ i = arg max vj∈V2 f(vi, vj) Similarly, for each vj ∈V2, we obtain v′ j ∈V1: v′ j = arg max vi∈V1 f(vi, vj) Then, the propagation tree kernel KP (T1, T2) is defined as: X vi∈V1 Λ(vi, v′ i) + X vj∈V2 Λ(v′ j, vj) (2) where Λ(v, v′) evaluates the similarity of two subtrees rooted at v and v′, which is computed recursively as follows: 1) if v or v′ are leaf nodes, then Λ(v, v′) = f(v, v′); 2) else Λ(v, v′) = f(v, v′) Qmin(nc(v),nc(v′)) k=1 (1 + Λ(ch(v, k), ch(v′, k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈[0, 1] is used for softly counting similar subtrees instead of common subtrees. Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f. PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017). 4.3 Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree. Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens. Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007). For a propagation tree node v ∈T(r), let Lr v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤x < Lr v, v[0] = v, v[Lr v −1] = r). cPTK evaluates the similarity between two trees T1(r1) and T2(r2) as follows: X vi∈V1 Lr1 vi −1 X x=0 Λx(vi, v′ i) + X vj∈V2 Lr2 vj −1 X x=0 Λx(v′ j, vj) (3) where Λx(v, v′) measures the similarity of subtrees rooted at v[x] and v′[x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λx(v, v′) = f(v[x], v′[x]), where v[x] and v′[x] are the x-th ancestor nodes of v and v′ on the respective propagation path. 2) else Λx(v, v′) = Λ(v, v′), namely PTK. Clearly, PTK is a special case of cPTK when x = 0 (see equation 3). cPTK evaluates the oc712 currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases. 4.4 Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features. This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004). We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier. We treat each tree as an instance, and its similarity values with all training instances as feature space. Therefore, the kernel matrix of training set is m × m and that of test set is n×m where m and n are the sizes of training and test sets, respectively. 
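Building on the `Node` and `node_similarity` sketch above, the following illustrates how the PTK of equation (2) could be computed recursively and how the resulting Gram matrices might be passed to a kernel SVM. The `Tree` layout, the positional pairing of children, and the use of scikit-learn's precomputed-kernel SVC (in place of the LibSVM setup used in the paper) are assumptions for illustration only; cPTK would additionally accumulate Λ over ancestors along the propagation path as in equation (3).

```python
# Sketch of PTK (equation 2) plus a precomputed-kernel SVM, under the
# assumptions stated above. Reuses Node / node_similarity from the earlier sketch.
from typing import List
import numpy as np
from sklearn.svm import SVC


class Tree:
    def __init__(self, nodes, children):
        self.nodes = nodes          # list of Node objects (the set V)
        self.children = children    # children[i] = indices of direct responses to node i


def subtree_sim(t1, t2, i, j):
    """Lambda(v, v'): soft subtree similarity rooted at nodes i and j."""
    sim = node_similarity(t1.nodes[i], t2.nodes[j])
    c1, c2 = t1.children[i], t2.children[j]
    if not c1 or not c2:            # leaf case: Lambda = f
        return sim
    for a, b in zip(c1, c2):        # pair the first min(nc, nc') children positionally (assumed order)
        sim *= 1.0 + subtree_sim(t1, t2, a, b)
    return sim


def ptk(t1: Tree, t2: Tree) -> float:
    """K_P(T1, T2): pair each node with its most similar counterpart, then sum Lambda."""
    total = 0.0
    for i in range(len(t1.nodes)):
        j = max(range(len(t2.nodes)),
                key=lambda j: node_similarity(t1.nodes[i], t2.nodes[j]))
        total += subtree_sim(t1, t2, i, j)
    for j in range(len(t2.nodes)):
        i = max(range(len(t1.nodes)),
                key=lambda i: node_similarity(t1.nodes[i], t2.nodes[j]))
        total += subtree_sim(t1, t2, i, j)
    return total


def gram(trees_a: List[Tree], trees_b: List[Tree]) -> np.ndarray:
    """Kernel matrix between two collections of propagation trees."""
    return np.array([[ptk(ta, tb) for tb in trees_b] for ta in trees_a])


# Usage sketch: an m x m training kernel and an n x m test kernel, as above.
# train_trees, train_labels, test_trees are assumed to be available.
# K_train = gram(train_trees, train_trees)
# K_test = gram(test_trees, train_trees)
# clf = SVC(kernel="precomputed").fit(K_train, train_labels)
# predictions = clf.predict(K_test)
```

Note that scikit-learn's SVC handles multi-class problems one-vs-one by default; reproducing the one-vs-all setup described next would mean training one binary classifier per class (e.g., via OneVsRestClassifier) over the same precomputed matrices.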
For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor. We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015). 5 Experiments and Results 5.1 Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth. We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016). The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets. First, we extracted the popular source tweets2 that are highly retweeted or replied. We then collected all the propagation threads (i.e., retweets and replies) for these source tweets. Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Table 1: Statistics of the datasets Statistic Twitter15 Twitter16 # of users 276,663 173,487 # of source tweets 1,490 818 # of threads 331,612 204,820 # of non-rumors 374 205 # of false rumors 370 205 # of true rumors 372 205 # of unverified rumors 374 203 Avg. time length / tree 1,337 Hours 848 Hours Avg. # of posts / tree 223 251 Max # of posts / tree 1,768 2,765 Min # of posts / tree 55 81 Twrench3 and crawled the replies through Twitter’s web interface. Finally, we annotated the source tweets by referring to the labels of the events they are from. We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc). Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event’s label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events. We make the datasets produced publicly accessible4. Table 1 gives statistics on the resulting datasets. 5.2 Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015). DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015), which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features. 
DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based 3https://twren.ch 4https://www.dropbox.com/s/ 7ewzdrbelpmrnxu/rumdetect2017.zip?dl=0 713 model with RBF kernel (Yang et al., 2012), respectively, both using hand-crafted features based on the overall statistics of the posts. RFC: The Random Forest Classifier proposed by Kwon et al. (2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics. GRU: The RNN-based rumor detection model proposed by Ma et al. (2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time. BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM. Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTK- and cPTKare the setting of only using content while ignoring user properties. We implemented DTC and RFC with Weka5, SVM models with LibSVM6 and GRU with Theano7. We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation. We used accuracy, F1 measure as evaluation metrics. 5.3 Experimental Results Table 2 shows that our proposed methods outperform all the baselines on both datasets. Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information. This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., “what?”, “really?”, “not sure”, etc.). This also justifies the good performance of BOW even though it only uses uni-grams for representation. Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in our datasets containing these expressions. That is why the results of DTR are not satisfactory. SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits. But none of the models can directly incorporate structured propagation patterns for deep similarity compar5http://www.cs.waikato.ac.nz/ml/weka/ 6https://www.csie.ntu.edu.tw/˜cjlin/ libsvm/ 7http://deeplearning.net/software/ theano/ Table 2: Rumor detection results (NR: NonRumor; FR: False Rumor; TR: True Rumor; UR: Unverified Rumor) (a) Twitter15 Dataset Method NR FR TR UR Acc. F1 F1 F1 F1 DTR 0.409 0.501 0.311 0.364 0.473 SVM-RBF 0.318 0.455 0.037 0.218 0.225 DTC 0.454 0.733 0.355 0.317 0.415 SVM-TS 0.544 0.796 0.472 0.404 0.483 RFC 0.565 0.810 0.422 0.401 0.543 GRU 0.646 0.792 0.574 0.608 0.592 BOW 0.548 0.564 0.524 0.582 0.512 PTK0.657 0.734 0.624 0.673 0.612 cPTK0.697 0.760 0.645 0.696 0.689 PTK 0.710 0.825 0.685 0.688 0.647 cPTK 0.750 0.804 0.698 0.765 0.733 (b) Twitter16 Dataset Method NR FR TR UR Acc. F1 F1 F1 F1 DTR 0.414 0.394 0.273 0.630 0.344 SVM-RBF 0.321 0.423 0.085 0.419 0.037 DTC 0.465 0.643 0.393 0.419 0.403 SVM-TS 0.574 0.755 0.420 0.571 0.526 RFC 0.585 0.752 0.415 0.547 0.563 GRU 0.633 0.772 0.489 0.686 0.593 BOW 0.585 0.553 0.556 0.655 0.578 PTK0.653 0.673 0.640 0.722 0.567 cPTK0.702 0.711 0.664 0.816 0.608 PTK 0.722 0.784 0.690 0.786 0.644 cPTK 0.732 0.740 0.709 0.836 0.686 ison between propagation trees. SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours. 
So, they performed obviously worse than our approach. Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data. In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals. Therefore, the superiority of our models is clear: PTK- which only uses text is already better than GRU, demonstrating the importance of propagation structures. PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective. It is also observed that cPTK outperforms PTK except for non-rumor class. This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non714 (a) Twitter15 Dataset (b) Twitter16 Dataset Figure 4: Results of rumor early detection Figure 5: The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful. This might be due to the generally weak signals originated from node properties on the paths during non-rumor’s diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors. This is not an issue in cPTK- since user information is not considered at all. Over all classes, cPTK achieves the highest accuracies on both datasets. Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors. This is because the features of existing methods were defined for a binary (rumor vs. non-rumor) classification problem. So, they do not perform well for finer-grained classes. Our approach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure. 5.4 Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible. In early detection task, all the posts after a detection deadline are invisible during test. The earlier the deadline, the less propagation information can be available. Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection715 specific algorithm) against various deadlines. In the first few hours, our approach demonstrates superior early detection performance than other models. Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models. Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage. Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage. Many textual signals (underlined) can also be observed in that early period. Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering. 
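The paper states only that posts after the detection deadline are invisible at test time; one plausible way to realize that condition, sketched below under that assumption, is to truncate each propagation tree at the deadline (measured as time lag from the source tweet) and score the trained kernel classifier on the truncated trees. The helper reuses the `Tree`/`Node` structures and `gram`/`clf` names from the earlier sketches; the deadline values are illustrative.

```python
# Hypothetical early-detection evaluation: keep only the part of each tree
# observed before a deadline, then evaluate the already-trained classifier.
def truncate_tree(tree, deadline_hours: float):
    """Return a copy of `tree` containing only nodes posted before the deadline."""
    keep = [i for i, v in enumerate(tree.nodes) if v.time_lag <= deadline_hours]
    index = {old: new for new, old in enumerate(keep)}
    nodes = [tree.nodes[i] for i in keep]
    children = [[index[c] for c in tree.children[i] if c in index] for i in keep]
    return Tree(nodes, children)


# Example deadlines (hours after the source tweet), cf. the early-detection plots:
# for deadline in [4, 8, 12, 24, 36, 48]:
#     K_test = gram([truncate_tree(t, deadline) for t in test_trees], train_trees)
#     print(deadline, clf.score(K_test, test_labels))
```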
6 Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees. A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes. Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions. Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks. Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination. In the future, we will focus on improving the rumor detection task by exploring network representation learning framework. Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media. Acknowledgment This work is partly supported by General Research Fund of Hong Kong (14232816). We would like to thank anonymous reviewers for the insightful comments. References Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of WWW. Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in neural information processing systems. pages 625–632. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics. Association for Computational Linguistics, page 423. Nicholas DiFonzo and Prashant Bordia. 2007. Rumor, gossip and urban legends. Diogenes 54(1):19–35. Adrien Friggeri, Lada A Adamic, Dean Eckles, and Justin Cheng. 2014. Rumor cascades. In Proceedings of ICWSM. Aniko Hannak, Drew Margolin, Brian Keegan, and Ingmar Weber. 2014. Get back! you don’t know me like that: The social mediation of fact checking interventions in twitter conversations. In ICWSM. Sejeong Kwon, Meeyoung Cha, and Kyomin Jung. 2017. Rumor detection over varying time windows. PLOS ONE 12(1):e0168344. Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent features of rumor propagation in online social media. In Proceedings of ICDM. Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, and Sameena Shah. 2015. Real-time rumor debunking on twitter. In Proceedings of CIKM. Michal Lukasik, Trevor Cohn, and Kalina Bontcheva. 2015. Classifying tweet level judgements of rumours in social media. arXiv preprint arXiv:1506.00468 . Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of IJCAI. Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of CIKM. Meredith Ringel Morris, Scott Counts, Asta Roseway, Aaron Hoff, and Julia Schwarz. 2012. Tweeting is believing?: understanding microblog credibility perceptions. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work. ACM, pages 441–450. 716 Alessandro Moschitti. 2004. 
A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, page 335. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In European Conference on Machine Learning. Springer, pages 318–329. Daniel Preotiuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An analysis of the user occupational class through twitter content. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). Association for Computational Linguistics, pages 1754–1764. Vahed Qazvinian, Emily Rosengren, Dragomir R Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1589–1599. Ralph L Rosnow. 1991. Inside rumor: A personal journey. American Psychologist 46(5):484. Jun Sun, Min Zhang, and Chew Lim Tan. 2010. Exploring syntactic structural features for sub-tree alignment using bilingual tree kernels. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 306–315. Shengyun Sun, Hongyan Liu, Jun He, and Xiaoyong Du. 2013. Detecting event rumors on sina weibo automatically. In Web Technologies and Applications, Springer, pages 120–131. Cass R Sunstein. 2014. On rumors: How falsehoods spread, why we believe them, and what can be done. Princeton University Press. Ke Wu, Song Yang, and Kenny Q Zhu. 2015. False rumors detection on sina weibo by propagation structures. In 2015 IEEE 31st International Conference on Data Engineering (ICDE). IEEE, pages 651–662. Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. Min Zhang, GuoDong Zhou, and Aiti Aw. 2008. Exploring syntactic structured features over parse trees for relation extraction using kernel methods. Information processing & management 44(2):687–701. Zhe Zhao, Paul Resnick, and Qiaozhu Mei. 2015. Enquiring minds: Early detection of rumors in social media from enquiry posts. In Proceedings of WWW. GuoDong Zhou, Min Zhang, Dong Hong Ji, and QiaoMing Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. EMNLP-CoNLL 2007 page 728. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLOS ONE 11(3):e0150989. 717
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 718–728 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1067 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 718–728 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1067 EmoNet: Fine-Grained Emotion Detection with Gated Recurrent Neural Networks Muhammad Abdul-Mageed School of Library, Archival & Information Studies University of British Columbia [email protected] Lyle Ungar Computer and Information Science University of Pennsylvania [email protected] Abstract Accurate detection of emotion from natural language has applications ranging from building emotional chatbots to better understanding individuals and their lives. However, progress on emotion detection has been hampered by the absence of large labeled datasets. In this work, we build a very large dataset for fine-grained emotions and develop deep learning models on it. We achieve a new state-of-the-art on 24 fine-grained types of emotions (with an average accuracy of 87.58%). We also extend the task beyond emotion types to model Robert Plutchik’s 8 primary emotion dimensions, acquiring a superior accuracy of 95.68%. 1 Introduction According to the Oxford English Dictionary, emotion is defined as “[a] strong feeling deriving from one’s circumstances, mood, or relationships with others.” 1 This “standard” definition identifies emotions as constructs involving something innate that is often invoked in social interactions and that aids in communicating with others(Hwang and Matsumoto, 2016). It is no exaggeration that humans are emotional beings: Emotions are an integral part of human life, and affect our decision making as well as our mental and physical health. As such, developing emotion detection models is important; they have a wide array of applications, ranging from building nuanced virtual assistants that cater for the emotions of their users to detecting the emotions of social media users in order to understand their mental and/or physical health. 1https://en.oxforddictionaries.com/ definition/emotion. However, emotion detection has remained a challenging task, partly due to the limited availability of labeled data and partly due the controversial nature of what emotions themselves are (Aaron C. Weidman and Tracy, 2017). Recent advances in machine learning for natural language processing (NLP) suggest that, given enough labeled data, there should be an opportunity to build better emotion detection models. Manual labeling of data, however, is costly and so it is desirable to develop labeled emotion data without annotators. While the proliferation of social media has made it possible for us to acquire large datasets with implicit labels in the form of hashtags (Mohammad and Kiritchenko, 2015), such labels are noisy and reliable. In this work, we seek to enable deep learning by creating a large dataset of fine-grained emotions using Twitter data. More specifically, we harness cues in Twitter data in the form of emotion hashtags as a way to build a labeled emotion dataset that we then exploit using distant supervision (Mintz et al., 2009) (the use of hashtags as a surrogate for annotator-generated emotion labels) to build emotion models grounded in psychology. 
We construct such a dataset and exploit it using powerful deep learning methods to build accurate, high coverage models for emotion prediction. Overall, we make the following contributions: 1) Grounded in psychological theory of emotions, we build a large-scale, high quality dataset of tweets labeled with emotions. Key to this are methods to ensure data quality, 2) we validate the data collection method using human annotations, 3) we develop powerful deep learning models using a gated recurrent network to exploit the data, yielding new state-of-the-art on 24 fine-grained types of emotions, and 4) we extend the task beyond these emotion types to model Plutick’s 8 primary emotion dimensions. 718 Our emotion modeling relies on distant supervision (Read, 2005; Mintz et al., 2009), the approach of using cues in data (e.g., hashtags or emoticons) as a proxy for “ground truth” labels as we explained above. Distant supervision has been investigated by a number of researchers for emotion detection (Tanaka et al., 2005; Mohammad, 2012; Purver and Battersby, 2012; Wang et al., 2012; Pak and Paroubek, 2010; Yang et al., 2007) and for other semantic tasks such as sentiment analysis (Read, 2005; Go et al., 2009) and sarcasm detection (Gonz´alez-Ib´anez et al., 2011). In these works, authors successfully use emoticons and/or hashtags as marks to label data after performing varying degrees of data quality assurance. We take a similar approach, using a larger collection of tweets, richer emotion definitions, and stronger filtering for tweet quality. The remainder of the paper is organized as follows: We first overview related literature in Section 2, describe our data collection in Section 3.1, and the annotation study we performed to validate our distant supervision method in Section 4. We then describe our methods in Section 5, provide results in Section 6, and conclude in Section 8. 2 Related Work 2.1 Computational Treatment of Emotion The SemEval-2007 Affective Text task (Strapparava and Mihalcea, 2007) [SEM07] focused on classification of emotion and valence (i.e., positive and negative texts) in news headlines. A total of 1,250 headlines were manually labeled with the 6 basic emotions of Ekman (Ekman, 1972) and made available to participants. Similarly, (Aman and Szpakowicz, 2007) describe an emotion annotation task of identifying emotion category, emotion intensity and the words/phrases that indicate emotion in blog post data of 4,090 sentences and a system exploiting the data. Our work differs from both that of SEM07 (Strapparava and Mihalcea, 2007) and (Aman and Szpakowicz, 2007) in that we focus on a different genre (i.e., Twitter) and investigate distant supervision as a way to acquire a significantly larger labeled dataset. Our work is similar to (Mohammad, 2012; Mohammad and Kiritchenko, 2015), (Wang et al., 2012), and (Volkova and Bachrach, 2016) who use distant supervision to acquire Twitter data with emotion hashtags and report analyses and experiments to validate the utility of this approach. For example, (Mohammad, 2012) shows that by using a simple domain adaptation method to train a classifier on their data they are able to improve both precision and recall on the SemEval-2007 (Strapparava and Mihalcea, 2007) dataset. As the author points out, this is another premise that the selflabeled hashtags acquired from Twitter are consistent, to some degree, with the emotion labels given by the trained human judges who labeled the SemEval-2007 data. 
As pointed out earlier, (Wang et al., 2012) randomly sample a set of 400 tweets from their data and human-label as relevant/irrelevant, as a way to verify the distant supervision approach with the quality assurance heuristics they employ. The authors found that the precision on a test set is 93.16%, thus confirming the utility of the heuristics. (Wang et al., 2012) provide a number of important observations, as conclusions based on their work. These include that since they are provided by the tweets’ writers, the emotion hashtags are more natural and reliable than the emotion labels traditionally assigned by annotators to data by a few annotators. This is the case since in the lab-condition method annotators need to infer the writers emotions from text, which may not be accurate. Additionally, (Volkova and Bachrach, 2016) follow the same distant supervision approach and find correlations of users’ emotional tone and the perceived demographics of these users’ social networks exploiting the emotion hashtag-labeled data. Our dataset is more than an order of magnitude larger than (Mohammad, 2012) and (Volkova and Bachrach, 2016) and the range of emotions we target is much more fine grained than (Mohammad, 2012; Wang et al., 2012; Volkova and Bachrach, 2016) since we model 24 emotion types, rather than focus on ≤7 basic emotions. (Yan et al., 2016; Yan and Turtle, 2016a,b) develop a dataset of 15,553 tweets labeled with 28 emotion types and so target a fine-grained range as we do. The authors instruct human annotators under lab conditions to assign any emotion they feel is expressed in the data, allowing them to assign more than one emotion to a given tweet. A set of 28 chosen emotions was then decided upon and further annotations were performed using Amazon Mechanical Turk (AMT). The authors cite an agreement of 0.50 Krippendorff’s alpha (α) between the lab/expert annotators, and an (α) of 0.28 between experts and AMT workers. EmoTweet719 28 is a useful resource. However, the agreement between annotators is not high and the set of assigned labels do not adhere to a specific theory of emotion. We use a much larger dataset and report an accuracy of the hashtag approach at 90% based on human judgement as reported in Section 4. 2.2 Mood A number of studies have also been performed to analyze and/or model mood in social media data. (De Choudhury et al., 2012) identify more than 200 moods frequent on Twitter as extracted from psychological literature and filtered by AMT workers. They then collect tweets which have one of the moods in their mood lexicon in the form of a hashtag. To verify the quality of the mood data, the authors run AMT studies where they ask workers whether a tweet displayed the respective mood hashtag or not and find that in 83% of the cases hashtagged moods at the end of posts did capture users’ moods, whereas for posts with mood hashtags anywhere in the tweet, only 58% of the cases capture the mood of users. Although they did not build models for mood detection, the annotation studies (De Choudhury et al., 2012) perform further support our specific use of hashtags to label emotions. (Mishne and De Rijke, 2006) collect user-labeled mood from blog post text on LiveJournal and exploit them for predicting the intensity of moods over a time span rather than at the post level. Similarly, (Nguyen, 2010) builds models to infer patterns of moods in a large collection of LiveJournal posts. 
Some of the moods in these LiveJournal studies (e.g., hungry, cold), as (De Choudhury et al., 2012) explain, would not fit any psychological theory. Our work is different in that it is situated in psychological theory of emotion. 2.3 Deep Learning for NLP In spite of the effectiveness of feature engineering for NLP, it is a labor intensive task that also needs domain expertise. More importantly, feature engineering falls short of extracting and organizing all the discriminative information from data (LeCun et al., 2015; Goodfellow et al., 2016). Neural networks (Goodfellow et al., 2016) have emerged as a successful class of methods that has the power of automatically discovering the representations needed for detection or classification and has been successfully applied to multiple NLP tasks. A line of studies in the literature (e.g., (Labutov and Lipson, 2013; Maas et al., 2011; Tang et al., 2014b,a) aim to learn sentiment-specific word embeddings (Bengio et al., 2003; Mikolov et al., 2013) from neighboring text. Another thread of research focuses on learning semantic composition (Mitchell and Lapata, 2010), including extensions to phrases and sentences with recursive neural networks (a class of syntax-tree models) (Socher et al., 2013; Irsoy and Cardie, 2014; Li et al., 2015) and to documents with distributed representations of sentences and paragraphs (Le and Mikolov, 2014; Tang et al., 2015) for modeling sentiment. Long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Neural Nets (GRNNs) (Cho et al., 2014; Chung et al., 2015), variations of recurrent neural networks (RNNs), a type of networks suitable for handling time-series data like speech (Graves et al., 2013) or handwriting recognition (Graves, 2012; Graves and Schmidhuber, 2009), have also been used successfully for sentiment analysis (Ren et al., 2016; Liu et al., 2015; Tai et al., 2015; Tang et al., 2015; Zhang et al., 2016). Convolutional neural networks (CNNs) have also been quite successful in NLP, and have been applied to a range of sentence classification tasks, including sentiment analysis (Blunsom et al., 2014; Kim, 2014; Zhang et al., 2015). Other architectures have also been recently proposed (e.g., (Bradbury et al., 2016)). A review of neural network methods for NLP can be found in (Goldberg, 2016). 3 Data 3.1 Collection of a Large-Scale Dataset To be able to use deep learning for modeling emotion, we needed a large dataset of labeled tweets. Since there is no such human-labeled dataset publicly available, we follow (Mohammad, 2012; Mintz et al., 2009; Purver and Battersby, 2012; Gonz´alez-Ib´anez et al., 2011; Wang et al., 2012) in adopting distant supervision: We collect tweets with emotion-carrying hashtags as a surrogate for emotion labels. To be able to collect enough tweets to serve our need, we developed a list of hashtags representing each of the 24 emotions proposed by Robert Plutchick (Plutchik, 1980, 1985, 1994). Plutchik (Plutchik, 2001) organizes emotions in a three-dimensional circumplex model analogous to the colors on a color wheel. The cone’s vertical dimension represents intensity, and the 3 circle represent degrees of similarity 720 Figure 1: Plutchik’s wheel of emotion. among the various emotion types. The eight sectors are meant to capture that there are eight primary emotion dimensions arranged as four pairs of opposites. Emotions in the blank spaces are the primary emotion dyads (i.e., emotions that are mixtures of two of the primary emotions). 
For this work, we exclude the dyads in the exploded model from our treatment. For simplicity, we refer to the circles as plutchik-1: with the emotions {admiration, amazement, ecstasy, grief, loathing, rage, terror, vigilance}, plutchik-2: with the emotions {joy, trust, fear, surprise, sadness, disgust, anger, anticipation}, and plutchik-3: with the emotions {acceptance, annoyance, apprehension, boredom, distraction, interest, pensiveness, serenity}. The wheel is shown in Figure 1. For each emotion type, we prepared a seed set of hashtags representing the emotion. We used Google synonyms and other online dictionaries and thesauri (e.g., www.thesaurus. com) to expand the initial seed set of each emotion. We acquire a total of 665 emotion hashtags across the 24 emotion types. For example, for the joy emotion, a subset of the seeds in our expanded set is {“happy”, “happiness”, “joy”, “joyful”, “joyfully”, “delighted”, “feelingsunny”, “blithe”, “beatific”, “exhilarated”, “blissful”, “walkingonair”, “jubilant”}. We then used the expanded set to extract tweets with hashtags from the set from a number of massive-scale in-house Twitter datasets. We also used Twitter API to crawl Twitter with hashtags from the expanded set. Using this method, we were able to acquire a dataset of about 1/4 billion tweets covering an extended time span from July 2009 till January 2017. 3.2 Preprocessing and Quality Assurance Twitter data are very noisy, not only because of use of non-standard typography (which is less of a problem here) but due to the many duplicate tweets and the fact that tweets often have multiple emotion hashtags. Since these reduce our ability to build accurate models, we need to clean the data and remove duplicates. Starting with > 1/4 billion tweets, we employ a rigorous and strict pipeline. This results in a vastly smaller set of about 1.6 million dependable labeled tweets. Since our goal is to create non-overlapping categories at the level of a tweet, we first removed all tweets with hashtags belonging to more than one emotion of the 24 emotion categories. Since it was observed (e.g., (Mohammad, 2012; Wang et al., 2012)) and also confirmed by our annotation study as described in Section 4, that hashtags in tweets with URLs are less likely to correlate with a true emotion label, we remove all tweets with URLs from our data. We filter out duplicates using a two-step procedure: 1) we remove all retweets (based on existence of the token “RT” regardless of case) and 2) we use the Python library pandas http://pandas. pydata.org/ “drop duplicates” method to compare the tweet texts of all the tweets after normalizing character repetitions [all consecutive characters of > 2 to 2] and user mentions (as detected by a string starting with an “@” sign). We then performed a manual inspection of a random sample of 1,000 tweets from the data and found no evidence of any remaining tweet duplicates. Next, even though the emotion hashtags themselves are exclusively in English, we observe the data do have tweets in languages other than English. This is due to code-switching, but also to the fact that our data dates back to 2009 and Twitter did not allow use of hashtags for several non-English languages until 2012. To filter out non-English, we use the langid (Lui and Baldwin, 2012) (https://github.com/ saffsd/langid.py) library to assign language tags to the tweets. 
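The quality-assurance steps just described can be summarized in a condensed sketch. This is not the authors' code: the DataFrame column name, the `hashtag_to_emotion` lexicon mapping, and the exact regular expressions are assumptions; pandas (`drop_duplicates`) and langid are the libraries named in the text.

```python
# Condensed sketch of the preprocessing pipeline described above:
# drop multi-emotion tweets, URLs, and retweets; normalize character
# repetitions and user mentions; deduplicate; keep English tweets (langid).
import re
import langid
import pandas as pd


def normalize(text: str) -> str:
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # squeeze runs of >2 repeated chars to 2
    text = re.sub(r"@\w+", "@user", text)        # normalize user mentions
    return text.lower().strip()


def clean(df: pd.DataFrame, hashtag_to_emotion: dict) -> pd.DataFrame:
    def emotions(text):
        # emotion categories implied by the tweet's hashtags (665-hashtag lexicon)
        tags = re.findall(r"#(\w+)", text.lower())
        return {hashtag_to_emotion[t] for t in tags if t in hashtag_to_emotion}

    df = df[~df["text"].str.contains(r"http\S+", regex=True)]               # drop tweets with URLs
    df = df[~df["text"].str.contains(r"\bRT\b", case=False, regex=True)]    # drop retweets
    df = df[df["text"].apply(lambda t: len(emotions(t)) == 1)]              # exactly one emotion category
    df = df.assign(norm=df["text"].apply(normalize)).drop_duplicates(subset="norm")
    df = df[df["norm"].apply(lambda t: langid.classify(t)[0] == "en")]      # keep English only
    return df.drop(columns="norm")
```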
Since the common wisdom in the literature (e.g., (Mohammad, 2012; Wang et al., 2012)) is to restrict data to hashtags occurring in final position of a tweet, we investigate correlations between a tweet’s relevance and emotion hashtag location in Section 4 and test models exclusively on data with hashtags occurring in final position. We also only use tweets con721 taining at least 5 words. Table 2 shows statistics of the data after applying our cleaning, filtering, language identification, and deduplication pipeline. Since our focus is on English, we only show statistics for tweets tagged with an “en” (for “English”) label by langid. Table 2 provides three types of relevant statistics: 1) counts of all tweets, 2) counts of tweets with at least 5 words and the emotion hashtags occurring in the last quarter of the tweet text (based on character count), and 3) counts of tweets with at least 5 words and the emotion hashtags occurring as the final word in the tweet text. As the last column in Table 2 shows, employing our most strict criterion where an emotion hashtag must occur finally in a tweet of a minimal length 5 words, we acquire a total of 1,608,233 tweets: 205,125 tweets for plutchik-1, 790,059 for plutchik-2, and 613,049 for plutchik-3. 2 Emotion ct ct@lq ct@end admiration 292,153 150,509 112,694 amazement 568,255 358,472 34,826 ecstasy 54,174 34,307 23,856 grief 102,980 33,141 12,568 loathing 90,465 41,787 456 rage 30,994 11,777 4,749 terror 84,827 25,908 15,268 vigilance 6,171 1,028 708 plutchik-1 1,230,019 656,929 205,125 anger 131,082 82,447 56,472 anticipation 67,175 36,846 26,655 disgust 212,770 145,052 52,067 fear 302,989 153,513 98,657 joy 974,226 522,689 330,738 sadness 1,252,192 762,901 142,300 surprise 143,755 78,570 53,915 trust 198,619 103,332 29,255 plutchik-2 3,282,808 1,885,350 790,059 acceptance 138,899 54,706 16,522 annoyance 954,027 791,869 364,135 apprehension 29,174 11,650 7,828 boredom 872,246 583,994 152,105 distraction 122,009 52,633 617 interest 113,555 67,216 56,659 pensiveness 11,751 5,012 3,513 serenity 97,467 36,817 11,670 plutchik-3 2,339,128 1,603,897 613,049 ALL 6,851,955 4,146,176 1,608,233 Table 2: Data statistics. 4 Annotation Study In their work, (Wang et al., 2012) manually label a random sample of 400 tweets extracted with hash2The data can be acquired by emailing the first author. The distribution is in the form of tweet ids and labels, to adhere to Twitter conditions. tags in a similar way as we acquire our data and find that human annotators agree 93% of the time with the hashtag emotion type if the hashtag occurs as the last word in the tweet. We wanted to validate our use of hashtags in a similar fashion and on a bigger random sample. We had human annotators label a random sample of 5,600 tweets that satisfy our preprocessing pipeline. Manual inspection during annotation resulted in further removing a negligible 16 tweets that were found to have problems. For each of the remaining 5,584 tweets, the annotators assign a binary tag from the set {relevant, irrelevant} to indicate whether a tweet carries an emotion category as assigned using our distant supervision method or not. Annotators assigned 61.37% (n = 3, 427) “relevant” tags and 38.63% (n = 2, 157) “irrelevant” tags. Our analysis of this manually labeled dataset also supports the findings of (Wang et al., 2012): When we limit position of the emotion hashtag to the end of a tweet, we acquire 90.57% relevant data. 
We also find that if we relax the constraint on the hashtag position such that we allow the hashtag to occur in the last quarter of a tweet (based on a total tweet character count), we acquire 85.43% relevant tweets. We also find that only 23.20% (n = 795 out of 3, 427) of the emotion carrying tweets have the emotion hashtags occurring in final position, whereas 31.75% (n = 1, 088 out of 3, 427) of the tweets have the emotion hashtags in the last quarter of the tweet string. This shows how enforcing a final hashtag location results in loss of a considerable number of emotion tweets. As shown in Table 2, only 1, 608, 233 tweets out of a total of 6, 851, 955 tweets (% = 23, 47) in our bigger dataset have emotion hashtags occurring in final position. Overall, we agree with (Mohammad, 2012; Wang et al., 2012) that the accuracy acquired by enforcing a strict pipeline and limiting to emotion hashtags to final position is a reasonable measure for warranting good-quality data for training supervised systems, an assumption we have also validated with our empirical findings here. One advantage of using distant supervision under these conditions for labeling emotion data, as (Wang et al., 2012) also notes, is that the label is assigned by the writer of the tweet himself/herself rather than an annotator who could wrongly decide what category a tweet is. After all, emotion is a fuzzy concept and > 90% agreement as we 722 report here is higher than the human agreement usually acquired on many NLP tasks. Another advantage of this method is obviously that it enables us to acquire a sufficiently large training set to use deep learning. We now turn to describing our deep learning methods. 5 Methods For our core modeling, we use Gated Recurrent Neural Networks (GRNNs), a modern variation of recurrent neural networks (RNNs), which we now turn to introduce. For notation, we denote scalars with italic lowercase (e.g., x), vectors with bold lowercase (e.g.,x), and matrices with bold uppercase (e.g.,W). Recurrent Neural Network A recurrent neural network (RNN) is one type of neural network architecture that is particularly suited for modeling sequential information. At each time step t, an RNN takes an input vector xt ϵ IRn and a hidden state vector h t−1 ϵ IRm and produces the next hidden state h t by applying the recursive operation: ht = f (Wxt + Uht−1 + b) (1) Where the input to hidden matrix W ϵ IRmxn, the hidden to hidden matrix U ϵ IRmxm, and the bias vector b ϵ IRm are parameters of an affine transformation and f is an element-wise nonlinearity. While an RNN can in theory summarize all historical information up to time step ht, in practice it runs into the problem of vanishing/exploding gradients (Bengio et al., 1994; Pascanu et al., 2013) while attempting to learn longrange dependencies. LSTM Long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) addresses this exact problem of learning long-term dependencies by augmenting an RNN with a memory cell ct ϵ IRn at each time step. As such, in addition to the input vector xt, the hiddent vector ht−1, an LSTM takes a cell state vector ct−1 and produces ht and ct via the following calculations: it = σ Wixt + Uiht−1 + bi ft = σ  Wfxt + Ufht−1 + bf ot = σ (Woxt + Uoht−1 + bo) gt = tanh (Wgxt + Ught−1 + bg) ct = ft ⊙ct−1 + it ⊙gt ht = ot ⊙tanh(ct) (2) Where σ(·) and tanh(·) are the element-wise sigmoid and hyperbolic tangent functions, ⊙the element-wise multiplication operator, and it, ft, ot are the input, forget, and output gates. 
The gt is a new memory cell vector with candidates that could be added to the state. The LSTM parameters Wj, Uj, and bj are for j ϵ {i, f, o, g}. GRNNs (Cho et al., 2014; Chung et al., 2015) propose a variation of LSTM with a reset gate rt, an update state zt, and a new simpler hidden unit ht, as follows: rt = σ (Wrxt + Urht−1 + br) zt = σ (Wzxt + Uzht−1 + bz) ˜ht = tanh  Wxt + rt ∗U ˜hht−1 + b ˜h ht = zt ∗ht−1 + (1 −zt) ∗˜ht (3) The GRNN parameters Wj, Uj, and bj are for j ϵ {r, z, ˜h}. In this set up, the hidden state is forced to ignore a previous hidden state when the reset gate is close to 0, thus enabling the network to forget or drop irrelevant information. Additionally, the update gate controls how much information carries over from a previous hidden state to the current hidden state (similar to an LSTM memory cell). We use GRNNs as they are simpler and faster than LSTM. For GRNNs, we use Theano (Theano Development Team, 2016). Online Classifiers We compare the performance of the GRNNs to four online classifiers that are capable of handling the data size: Stochastic Gradient Descent (SGD), Multinomial Naive Bayes (MNB), Perceptron, and the Passive Agressive Classifier (PAC). These classifiers learn online from mini-batches of data. We use minibatches of 10,000 instances with all the four classifiers. We use the scikit-learn implementation of these classifiers (http://scikit-learn. org). Settings We aim to model Plutchik’s 24 finegrained emotions as well as his 8 primary emotion dimensions where each 3 related types of emotion (perceived as varying in intensity) are combined in one dimension. We now turn to describing our experiments experiments. 6 Experiments 6.1 Predicting Fine-Grained Emotions As explained earlier, Plutchik organizes the 24 emotion types in the 3 main circles that we will refer to as plutchik-1, plutchik-2, and plutchik-3. 723 Emotion Qadir (2013) Roberts (2012) MD (2015) Wang (2012) Volkova (2016) This work anger 400 0.44 583 0.64 1,555 0.28 457,972 0.72 4,963 0.80 56,472 0.75 anticip 26,655 0.70 disgust 922 0.67 761 0.19 12,948 0.92 52,067 0.82 fear 592 0.54 222 0.74 2,816 0.51 11,156 0.44 9,097 0.77 98,657 0.74 joy 1,005 0.59 716 0.68 8,240 0.62 567,487 0.72 15,559 0.79 330,738 0.91 sadness 560 0.46 493 0.69 3,830 0.39 489,831 0.65 4,232 0.62 142,300 0.73 surprise 324 0.61 3849 0.45 1,991 0.14 8,244 0.64 53,915 0.86 trust 29,255 0.82 ALL 4,500 0.53 3,777 0.67 21,051 0.49 1,991,184 52,925 0.78 790,059 0.83 Table 6: Comparison (in F-score) of our results with GRNNs to published literature. MD = Mohammad (2015). Note: For space restrictions, we take the liberty of using the last name of only the first author of each work. 
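To make the gating in equation (3) explicit, here is a minimal NumPy sketch of a single GRNN (GRU) recurrence step. The paper's models are implemented in Theano, so this stand-alone version is for illustration only; the parameter dictionary layout and the example dimensions are assumptions.

```python
# One GRU step h_t = GRU(x_t, h_{t-1}) following equation (3).
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


def gru_step(x_t, h_prev, params):
    """Single recurrence; `params` holds the W*, U*, b* matrices/vectors."""
    W_r, U_r, b_r = params["W_r"], params["U_r"], params["b_r"]
    W_z, U_z, b_z = params["W_z"], params["U_z"], params["b_z"]
    W_h, U_h, b_h = params["W_h"], params["U_h"], params["b_h"]

    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)               # reset gate
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)               # update gate
    h_tilde = np.tanh(W_h @ x_t + r_t * (U_h @ h_prev) + b_h)   # candidate hidden state
    return z_t * h_prev + (1.0 - z_t) * h_tilde                 # new hidden state


# Example with random parameters (n = 300 input dims, m = 128 hidden units, assumed):
# rng = np.random.default_rng(0)
# params = {k: rng.normal(scale=0.1, size=s) for k, s in
#           [("W_r", (128, 300)), ("U_r", (128, 128)), ("b_r", 128),
#            ("W_z", (128, 300)), ("U_z", (128, 128)), ("b_z", 128),
#            ("W_h", (128, 300)), ("U_h", (128, 128)), ("b_h", 128)]}
# h = np.zeros(128)
# for x in rng.normal(size=(30, 300)):   # a sequence of 30 word vectors
#     h = gru_step(x, h, params)
```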
Emotion SGD MNB PRCPTN PAC
baseline 60.00 60.00 60.00 60.00
admiration 78.30 78.01 74.24 79.86
amazement 37.57 35.71 42.51 46.69
ecstasy 51.53 51.89 47.37 53.53
grief 38.64 36.94 37.33 48.10
loathing 0.00 0.00 2.09 2.99
rage 3.47 4.49 14.02 17.04
terror 33.23 44.12 40.48 47.00
vigilance 2.53 2.56 5.52 8.42
plutchik-1 60.26 60.54 59.11 64.86
anger 19.41 13.84 24.54 29.26
anticipation 7.46 12.63 17.29 26.70
disgust 29.51 29.87 31.83 36.60
fear 21.45 25.49 30.41 33.59
joy 72.83 72.96 72.32 75.50
sadness 50.04 51.72 39.58 49.21
surprise 8.46 4.75 17.34 19.54
trust 42.09 38.52 44.48 47.51
plutchik-2 48.05 48.33 48.60 53.30
acceptance 0.12 2.74 13.98 13.04
annoyance 80.28 80.71 78.80 81.47
apprehension 0.80 0.00 9.72 10.66
boredom 49.53 51.27 52.02 57.84
distraction 0.00 2.99 3.42 0.00
interest 21.69 30.45 34.85 44.14
pensiveness 2.61 8.08 11.22 12.27
serenity 8.87 19.57 27.23 38.59
plutchik-3 62.20 64.00 64.04 68.14
ALL 56.84 57.62 57.25 62.10
Table 3: Results in F-score with traditional online classifiers.

We model the set of emotions belonging to each of the 3 circles independently, thus casting each as an 8-way classification task. Inspired by observations from the literature and our own annotation study, we limit our data to tweets of at least 5 words with an emotional hashtag occurring at the end. We then split the data representing each of the 3 circles into 80% training (TRAIN), 10% development (DEV), and 10% testing (TEST). As mentioned above, we run experiments with a range of online, out-of-core classifiers as well as the GRNNs. To train the GRNNs, we optimize the hyper-parameters of the network on a development set as we describe below, choosing a vocabulary size of 80K words (a vocabulary size we also use for the out-of-core classifiers), word embedding vectors of size 300 learnt directly from the training data, an input maximum length of 30 words, 7 epochs, and the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 0.001. We use 3 dense layers, each with 1,000 units. We use dropout (Hinton et al., 2012) for regularization, with a dropout rate of 0.5. For our loss function, we use categorical cross-entropy. We use a mini-batch (Cotter et al., 2011) size of 128. We found this architecture to work best with almost all the settings, and so we fix it across the board for all experiments with GRNNs.

Results with Traditional Classifiers: Results with the online classifiers are presented in terms of F-score in Table 3. As the table shows, among this group of classifiers, the Passive Aggressive Classifier (PAC) acquires the best performance. PAC achieves an overall F-score of 64.86% on plutchik-1, 53.30% on plutchik-2, and 68.14% on plutchik-3, two of which are higher than an arbitrary baseline of 60% (an arbitrary baseline that is higher than the majority class in the training data in any of the 3 cases).

Results with GRNNs: Table 4 presents results with GRNNs, compared with the best results using the traditional classifiers as acquired with PAC. As the table shows, the GRNN models are very successful across all 3 classification tasks. With GRNNs, we acquire overall F-scores of 91.21% on plutchik-1, 82.32% on plutchik-2, and 87.47% on plutchik-3. These results are 26.35%, 29.02%, and 25.37% higher than PAC, respectively.
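The GRNN models in the paper are implemented in Theano; purely as an illustration of the architecture and hyper-parameters reported above, a comparable classifier could be sketched in Keras as follows. The GRU width, the ReLU activations, and the dropout placement are assumptions, and X_train / y_train stand for hypothetical padded token-id sequences and one-hot labels.

```python
# A hypothetical Keras re-creation of the GRNN classifier described above
# (the paper's own models were written in Theano; this is only a sketch).
from tensorflow.keras import layers, models, optimizers

VOCAB, EMB_DIM, MAX_LEN, N_CLASSES = 80_000, 300, 30, 8

model = models.Sequential([
    layers.Embedding(VOCAB, EMB_DIM),       # embeddings learnt directly from the training data
    layers.GRU(EMB_DIM),                    # gated recurrent layer; width is an assumption
    layers.Dense(1000, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),   # one of the 8 emotions in a circle
])
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Inputs would be padded/truncated to MAX_LEN token ids before training, e.g.:
# model.fit(X_train, y_train, batch_size=128, epochs=7, validation_data=(X_dev, y_dev))
```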
PAC GRNNs
Emotion f-score prec rec f-score
admiration 79.86 94.53 95.28 94.91
amazement 46.69 90.44 89.02 89.73
ecstasy 53.53 83.49 90.01 86.62
grief 48.10 85.07 81.13 83.05
loathing 2.99 83.87 54.17 65.82
rage 17.04 80.00 75.11 77.48
terror 47.00 91.15 84.01 87.44
vigilance 8.42 71.93 70.69 71.30
plutchik-1 64.86 91.26 91.24 91.21
anger 29.26 74.95 69.20 71.96
anticipation 26.70 70.05 69.00 69.52
disgust 36.60 82.18 68.84 74.92
fear 33.59 73.74 72.51 73.12
joy 75.50 90.96 93.88 92.40
sadness 49.21 73.20 82.04 77.37
surprise 19.54 85.60 67.40 75.42
trust 47.51 82.43 76.83 79.53
plutchik-2 53.30 82.53 82.46 82.32
acceptance 13.04 77.10 71.76 74.33
annoyance 81.47 91.46 95.01 93.20
apprehension 10.66 80.40 61.07 69.41
boredom 57.84 85.95 84.40 85.16
distraction 0.00 87.50 25.00 38.89
interest 44.14 86.79 78.38 82.37
pensiveness 12.27 91.87 43.24 58.80
serenity 38.59 82.15 78.16 80.11
plutchik-3 68.14 88.94 89.08 88.89
ALL 62.10 87.58 87.59 87.47
Table 4: Results with GRNNs across Plutchik's 24 emotion categories. We compare to the best-performing traditional classifier (i.e., Passive Aggressive).

Negative Results: We experiment with augmenting the training data reported here in two ways: 1) for each emotion type, we concatenate the training data with training data of tweets that are more (or less) intense from the same sector/dimension in the wheel, and 2) for each emotion type, we add tweets where emotion hashtags occur in the last quarter of a tweet (which were originally filtered out from TRAIN). However, we gain no improvements based on either of these methods, thus reflecting the importance of using high-quality training data and the utility of our strict pipeline.

6.2 Predicting 8 Primary Dimensions

We now investigate the task of predicting each of the 8 primary emotion dimensions represented by the sectors of the wheel (where the three degrees of intensity of a given emotion are reduced to a single emotion dimension [e.g., {ecstasy, joy, serenity} are reduced to the joy dimension]). We concatenate the 80% training data (TRAIN) from each of the 3 circles' data into a single training set (TRAIN-ALL), the 10% DEV to form DEV-ALL, and the 10% TEST to form TEST-ALL. We test a number of hyper-parameters on DEV and find the ones we have identified on the fine-grained prediction to work best, and so we adopt them as is, with the exception of limiting training to only 2 epochs. We believe that with a wider exploration of hyper-parameters, improvements could be possible. As Table 5 shows, we are able to model the 8 dimensions with an overall accuracy of 95.68%. As far as we know, this is the first work on modeling these dimensions.

Dimension prec rec f-score
anger 97.40 97.72 97.56
anticipation 91.18 89.95 90.56
disgust 96.20 93.94 95.06
fear 94.97 94.38 94.68
joy 94.61 96.40 95.50
sadness 95.52 95.25 95.39
surprise 94.99 91.62 93.27
trust 96.36 97.58 96.96
All 95.68 95.68 95.68
Table 5: GRNNs results across 8 emotion dimensions. Each dimension represents three different emotions. For example, the joy dimension represents serenity, joy and ecstasy.

Emotion Volkova (2016) model This work
anger 12.38 74.95
disgust 5.71 82.18
fear 11.18 73.74
joy 44.57 90.96
sadness 18.04 73.20
surprise 5.33 85.60
ALL 26.95 80.12
Table 7: Comparison (in accuracy) to (Volkova and Bachrach, 2016)'s model.

7 Comparisons to Other Systems

We compare our results on the 8 basic emotions to the published literature.
As Table 6 shows, on this subset of emotions, our system is 4.53% (in accuracy) higher than the best published results (Volkova and Bachrach, 2016), facilitated by the fact that we have an order of magnitude more training data. As shown in Table 7, we also apply (Volkova and Bachrach, 2016)'s pre-trained model to our test set of the 6 emotions they predict (which belong to plutchik-2), and acquire an overall accuracy of 26.95%, which is significantly lower than our accuracy.

8 Conclusion

In this paper, we built a large, automatically curated dataset for emotion detection using distant supervision and then used GRNNs to model fine-grained emotion, achieving a new state-of-the-art performance. We also extended the classification to 8 primary emotion dimensions situated in psychological theory of emotion.

References

Conor M. Steckler, Aaron C. Weidman, and Jessica L. Tracy. 2017. The jingle and jangle of emotion assessment: Imprecise measurement, casual scale usage, and conceptual fuzziness in emotion research. Emotion.
Saima Aman and Stan Szpakowicz. 2007. Identifying expressions of emotion in text. In Text, Speech and Dialogue. Springer, pages 196–205.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3(Feb):1137–1155.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2):157–166.
Phil Blunsom, Edward Grefenstette, and Nal Kalchbrenner. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In ICML. pages 2067–2075.
Andrew Cotter, Ohad Shamir, Nati Srebro, and Karthik Sridharan. 2011. Better mini-batch algorithms via accelerated gradient methods. In Advances in Neural Information Processing Systems. pages 1647–1655.
Munmun De Choudhury, Scott Counts, and Michael Gamon. 2012. Not all moods are created equal! Exploring human emotional states in social media.
P. Ekman. 1972. Universal and cultural differences in facial expression of emotion. Nebraska Symposium on Motivation pages 207–283.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford 1(12).
Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research 57:345–420.
Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twitter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers-Volume 2. Association for Computational Linguistics, pages 581–586.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT Press.
Alex Graves. 2012.
Supervised sequence labelling. In Supervised Sequence Labelling with Recurrent Neural Networks, Springer, pages 5–13. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on. IEEE, pages 6645–6649. Alex Graves and J¨urgen Schmidhuber. 2009. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in neural information processing systems. pages 545–552. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580 . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Hyisung C Hwang and David Matsumoto. 2016. Emotional expression. The Expression of Emotion: Philosophical, Psychological and Legal Perspectives page 137. Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems. pages 2096–2104. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 . Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . 726 Igor Labutov and Hod Lipson. 2013. Re-embedding words. In ACL (2). pages 489–493. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML. volume 14, pages 1188–1196. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521(7553):436–444. Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eudard Hovy. 2015. When are tree structures necessary for deep learning of representations? arXiv preprint arXiv:1503.00185 . Pengfei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015. Multi-timescale long shortterm memory neural network for modelling sentences and documents. In EMNLP. Citeseer, pages 2326–2335. Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 system demonstrations. Association for Computational Linguistics, pages 25–30. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 142–150. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Hlt-naacl. volume 13, pages 746–751. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics, pages 1003–1011. Gilad Mishne and Maarten De Rijke. 2006. Capturing global mood levels using blog posts. In AAAI spring symposium: computational approaches to analyzing weblogs. pages 145–152. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science 34(8):1388–1429. Saif M Mohammad. 2012. #emotional tweets. 
In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation. Association for Computational Linguistics, pages 246–255. Saif M Mohammad and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion categories from tweets. Computational Intelligence 31(2):301–326. Thin Nguyen. 2010. Mood patterns and affective lexicon access in weblogs. In Proceedings of the ACL 2010 Student Research Workshop. Association for Computational Linguistics, pages 43–48. Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. In LREc. volume 10. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3) 28:1310–1318. Robert Plutchik. 1980. Emotion: A psychoevolutionary synthesis. Harpercollins College Division. Robert Plutchik. 1985. On emotion: The chickenand-egg problem revisited. Motivation and Emotion 9(2):197–200. Robert Plutchik. 1994. The psychology and biology of emotion.. HarperCollins College Publishers. Robert Plutchik. 2001. The nature of emotions human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American scientist 89(4):344–350. Matthew Purver and Stuart Battersby. 2012. Experimenting with distant supervision for emotion classification. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 482–491. Jonathon Read. 2005. Using emoticons to reduce dependency in machine learning techniques for sentiment classification. In Proceedings of the ACL student research workshop. Association for Computational Linguistics, pages 43–48. Yafeng Ren, Yue Zhang, Meishan Zhang, and Donghong Ji. 2016. Context-sensitive twitter sentiment classification using neural network. In AAAI. pages 215–221. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, Christopher Potts, et al. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP). Citeseer, volume 1631, page 1642. Carlo Strapparava and Rada Mihalcea. 2007. Semeval2007 task 14: Affective text. In Proceedings of the 4th International Workshop on Semantic Evaluations. Association for Computational Linguistics, pages 70–74. 727 Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 . Yuki Tanaka, Hiroya Takamura, and Manabu Okumura. 2005. Extraction and classification of facemarks. In Proceedings of the 10th international conference on Intelligent user interfaces. ACM, pages 28–34. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1422–1432. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014a. Building large-scale twitter-specific sentiment lexicon: A representation learning approach. In COLING. pages 172–182. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014b. 
Learning sentimentspecific word embedding for twitter sentiment classification. In ACL (1). pages 1555–1565. Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688. Svitlana Volkova and Yoram Bachrach. 2016. Inferring perceived demographics from user emotional tone and user-environment emotional contrast. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL. Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P Sheth. 2012. Harnessing twitter” big data” for automatic emotion identification. In Privacy, Security, Risk and Trust (PASSAT), 2012 International Conference on and 2012 International Confernece on Social Computing (SocialCom). IEEE, pages 587–592. Jasy Liew Suet Yan and Howard R Turtle. 2016a. Exploring fine-grained emotion detection in tweets. In Proceedings of NAACL-HLT. pages 73–80. Jasy Liew Suet Yan and Howard R Turtle. 2016b. Exposing a set of fine-grained emotion categories from tweets. In 25th International Joint Conference on Artificial Intelligence. page 8. Jasy Liew Suet Yan, Howard R Turtle, and Elizabeth D Liddy. 2016. Emotweet-28: A fine-grained emotion corpus for sentiment analysis . Changhua Yang, Kevin Hsin-Yih Lin, and Hsin-Hsi Chen. 2007. Emotion classification using web blog corpora. In Web Intelligence, IEEE/WIC/ACM International Conference on. IEEE, pages 275–278. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated neural networks for targeted sentiment analysis. In AAAI. pages 3087–3093. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems. pages 649–657. 728
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 729–740 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1068

Beyond Binary Labels: Political Ideology Prediction of Twitter Users

Daniel Preoţiuc-Pietro, Positive Psychology Center, University of Pennsylvania, [email protected]
Ye Liu∗, School of Computing, National University of Singapore, [email protected]
Daniel J. Hopkins, Political Science Department, University of Pennsylvania, [email protected]
Lyle Ungar, Computing & Information Science, University of Pennsylvania, [email protected]
(∗Work carried out during a research visit at the University of Pennsylvania.)

Abstract

Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users – groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.

1 Introduction

Social media is used by people to share their opinions and views. Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies. In addition, political membership is also predictable purely from one's interests or demographics: it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012).

User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences. Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b), gender (Burger et al., 2011; Sap et al., 2014), personality (Schwartz et al., 2013; Preoţiuc-Pietro et al., 2016), socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c), popularity (Lampos et al., 2014) or location (Cheng et al., 2010). Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015). However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007).
For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political beliefs (Barberá, 2015). Many users may choose not to publicly post about their political preference for various social goals, or perhaps this preference may not be strong or representative enough to be disclosed online. Dichotomous political preference also ignores users who do not have a political ideology. All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015).

The most common political ideology spectrum in the US is the conservative – liberal spectrum (Ellis and Stimson, 2012). We collect a novel data set of Twitter users mapped to this seven-point spectrum, which allows us to:
1. Uncover the differences in language use between ideological groups;
2. Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.

First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter. In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work. In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified. Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement. Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups (data is available at http://www.preotiuc.ro).

2 Related Work

Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user-generated data and advances in machine learning. Beyond its research-oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling. To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013), audio (Alam and Riccardi, 2014), text (Preoţiuc-Pietro et al., 2015a), profile images (Liu et al., 2016a), social data (Van Der Heide et al., 2012; Hall et al., 2014), social networks (Perozzi and Skiena, 2015; Rout et al., 2013), payment data (Wang et al., 2016) and endorsements (Kosinski et al., 2013).

Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016). First, researchers aimed to identify and quantify the orientation of words (Monroe et al., 2008), hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014), or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level. Our study belongs to the second category, where political orientation is inferred at a user level. All previous studies focus on labeling US conservatives vs.
liberals using either text (Rao et al., 2010), social network connections (Zamal et al., 2012), platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014), with very high reported accuracies of up to 94.9% (Conover et al., 2011). However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology. In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways. These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011), supporting partisan causes (Rao et al., 2010), by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011). As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online – fewer than 5% according to Priante et al. (2016) – and those that state their preference are very likely to be political activists. Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation. Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; Carpenter et al., 2016). Further, they still only look at predicting binary political orientation. To date, no other research on this topic has taken into account these findings. 3 Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D1). Each participant was compensated 730 1 2 3 4 5 6 7 0 250 500 750 1000 Political Orientation Figure 1: Distribution of political ideology in our data set, from 1 – Very Conservative through 7 – Very Liberal. with 3 USD for 15 minutes of their time. All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question. They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7). In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative – liberal spectrum and were removed from our analysis (399 users). We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age. Participants were all from the US in order to limit the impact of cultural and political factors. The political ideology distribution in our sample is presented in Figure 1. We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets. 
Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user’s own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey. This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania. In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D2). We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson). Liberals in our set (Nl = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures. Likewise, conservative users (Nc = 6234) had to follow all of the conservative figures and no liberal figures. We downloaded up to 3,200 of each user’s most recent tweets, leading to a total of 25,493,407 tweets. All tweets were downloaded around 10 August 2016. 4 Features In our analysis, we use a broad range of linguistic features described below. Unigrams We use the bag-of-words representation to reduce each user’s posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words). LIWC Traditional psychological studies use a dictionary-based approach to representing text. The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001), and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory. These include different parts-of-speech, topical categories and emotions. Each user is thereby represented as a frequency distribution over these categories. Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar. The clusters help reducing the feature space and provides additional interpretability. To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954). Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990). We use the method from (Preot¸iuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000). We have tried other alternatives to building clusters: using other word similarities to 731 generate clusters – such as NPMI (Lampos et al., 2014) or GloVe (Pennington et al., 2014) as proposed in (Preot¸iuc-Pietro et al., 2015a) – or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003). For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters. 
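The clustering step described in this section could be sketched roughly as follows. The gensim and scikit-learn calls, the cosine-similarity affinity, and the `tokenized_tweets` variable are illustrative assumptions rather than the authors' implementation, and a dense word-word similarity matrix is only practical for a trimmed vocabulary.

```python
# Sketch: derive "Word2Vec topics" by spectral clustering of word embeddings.
# `tokenized_tweets` (a list of token lists) is a hypothetical placeholder.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

w2v = Word2Vec(sentences=tokenized_tweets, vector_size=300, window=5,
               min_count=20, workers=4)
words = list(w2v.wv.index_to_key)
vectors = np.array([w2v.wv[w] for w in words])

# Build a word-word similarity matrix and cluster it into 500 groups.
sim = cosine_similarity(vectors)
sim = np.clip(sim, 0.0, None)            # spectral clustering expects non-negative affinities
labels = SpectralClustering(n_clusters=500, affinity="precomputed",
                            assign_labels="discretize",
                            random_state=0).fit_predict(sim)

clusters = {c: [w for w, l in zip(words, labels) if l == c] for c in range(500)}
```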
We aggregate all the words posted in a users’ tweets and represent each user as a distribution of the fraction of words belonging to each cluster. Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts. The most studied model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise. We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Mohammad and Turney, 2010, 2013). Using these lexicons, we assign a predicted emotion to each message and then average across all users’ posts to obtain user level emotion expression scores. Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20). This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors. 5 Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups. To illustrate differences between ideological groups we compare the two political extremes (Very Conservative – Very Liberal) and the political moderates (Moderate Conservative – Moderate Liberal). We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative–liberal leaning. We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics. For example, in D1, users who reported themselves as very conservative are older and more likely males (µage = 35.1, pctmale = 44%) than the data average (µage = 31.2, pctmale = 35%). Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender. In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012). Results with unigrams are presented in Figure 2 and with the other features in Table 1. These are selected using standard statistical significance tests. 5.1 Very Conservatives vs. Very Liberals The comparison between the extreme categories reveals the largest number of significant differences. The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms (‘praying’, ‘god’, W2V485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships (‘uncle’, ‘son’, L-FAMILY), another conservative value (Lakoff, 1997). The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate ‘conservative’ with ‘religious’ (Ellis and Stimson, 2012). 
Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts. Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, EmotPositive), confirming a previously hypothesised relationship (Napier and Jost, 2008). However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010). Political term analysis reveals the partisan terms 732 (a) V.Con.(1) vs. V.Lib.(7) (c) M.Con.(3) vs. M.Lib.(5) (e) Moderates (4) vs. V.Con.(1) + V.Lib.(7) (b) V.Con.(1) vs. V.Lib.(7) (d) M.Con.(3) vs. M.Lib.(5) (f) Moderates (4) vs. V.Con.(1) + V.Lib.(7) Figure 2: Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared. The size of the unigram is scaled by its correlation with the ideological group in bold. The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used). All correlations are significant at p < .05 and controlled for age and gender. r Category Words r Category Words V.Con.(1) vs. V.Lib.(7) V.Con.(1) vs. V.Lib.(7) .249 W2V–485 god, peace, thankful, pray, bless, blessed, prayers, praying .236 W2V–075 bad, kind, weird, kinda, horrible, creepy, strange, extremely .180 W2V–018 jesus, lord, christ, sin, grace, god’s, praise, gods, glory, thou .195 W2V–238 an, excuse, actual, idiot, asshole, example, absolute .156 W2V–099 church, bible, serve, worship, preach, christians, pastor .192 W2V–487 into, through, must, myself, decided, completely, upon .140 W2V–491 soooo, soo, sooooo, soooooo, tooo, sooooooo, toooo .191 W2V–110 quite, awful, exciting, brilliant, perfectly, usual .119 W2V–027 kno, yu, abt, tht, dnt, wut, tru, somethin, ion, wen .186 W2V–448 off, almost, whole, literally, entire, basically, ridiculous .204 L–RELIG god, hell, holy, soul, pray, angel, praying, christ, sin, amen .175 L–ANX awkward, worry, scared, fear, afraid, horrible, scary, upset .145 L–POSEMO love, good, lol, :), great, happy, best, thanks, win, free .164 L–ADVERB just, so, when, about, now, how, too, why, back, really .127 L–FAMILY baby, family, mom, dad, son, bro, mother, babies, fam, folks .161 L–CONJ and, so, but, if, when, how, as, or, because, then .118 L–NETSPEAK rt, u, lol, :), twitter, gonna, yo, ur, omg, ya .147 L–COMPARE like, more, as, best, than, better, after, most, before, same .101 L–YOU you, your, u, you’re, ur, ya, yourself, youre, you’ll, you’ve .138 L–DIFFER not, but, if, or, really, can’t, than, other, didn’t, actually .152 Emot–Joy love, good, happy, hope, god, birthday, fun, favorite, pretty .086 Emot–Positive love, good, happy, hope, god, birthday, real, fun, favorite .107 Emot–Surprise good, hope, birthday, excited, money, finally, chance, guess .132 → .068 Political Terms #pjnet, #tcot, @foxnews, polls, @realdonaldtrump, @tedcruz, @yahoonews .161 → .090 Political Terms gay, sanders, racism, racist, rape, @barackobama, democracy, feminist, democratic, protesting, protest, bernie, feminism, protesters, transgender M.Con.(3) vs. M.Lib.(5) M.Con.(3) vs. 
M.Lib.(5) .108 W2V–485 god, peace, thankful, pray, bless, blessed, prayers, praying .116 W2V–458 hilarious, celeb, capaldi, corrie, chatty, corden, barrowman .088 W2V–018 jesus, lord, christ, sin, grace, god’s, praise, gods, glory, thou .106 W2V–373 photo, art, pictures, photos, instagram, photoset, image .085 W2V–214 frank, savage, brad, ken, kane, pitt, watson, leonardo .106 W2V–316 hot, sex, naked, adult, teen, porn, lesbian, tube, tits .085 W2V–436 luck, lucky, boss, sir, c’mon, mate, bravo, ace, pal, keeper .087 W2V–024 turn, accidentally, barely, constantly, onto, bug, suddenly .086 W2V–389 ha, ooo, uh, ohhh, ohhhh, ma’am, gotcha, gee, ohhhhh .096 L–RELIG god, hell, holy, soul, pray, angel, praying, christ, sin, amen .104 L–SEXUAL fuck, gay, sex, sexy, dick, naked, fucks, cock, aids, cum .093 L–DRIVES love, good, lol, :), great, happy, best, thanks, win, free .088 L–ANGER hate, fuck, hell, stupid, mad, sucks, suck, war, dumb, ugly .093 L–WE we, our, us, let’s, we’re, lets, we’ll, we’ve, ourselves, we’d .087 L–AFFILIATION love, we, our, use, help, twitter, friends, family, join, friend .086 Emot–Joy love, good, happy, hope, god, birthday, fun, favorite, pretty .097 Emot–Disgust bad, hate, shit, finally, damn, feeling, hell, bitch, boy, sick .096 Political Terms islamic .136 Political Terms rape .086 rights Moderates (4) vs. V.Con.(1)+V.Lib.(7) Moderates (4) vs. V.Con.(1)+V.Lib.(7) .084 W2V–098 girls, boys, em, ladies, bitches, hoes, grown, dudes, dem .191 W2V–309 obama, president, scott, hillary, romney, clinton, ed, sarah .188 W2V–237 freedom, violence, revolution, muslim, muslims, terrorists .184 W2V–269 bill, rights, congress, gop, republicans, republican, passes .174 W2V–296 justice, rule, crusade, civil, pope, plot, humanity, terror .160 W2V–288 law, general, legal, safety, officer, emergency, agent .120 L–POWER up, best, over, win, down, help, god, big, high, top .103 L–RELIG god, hell, holy, soul, pray, angel, praying, christ, sin, amen .100 L–ARTICLE the, a, an .089 L–DEATH dead, die, died, war, alive, dying, wars, dies, buried, bury .083 L–RISK bad, stop, wrong, worst, lose, trust, safe, worse, losing .118 Emot–Fear watch, bad, god, hate, change, feeling, hell, crazy, bitch, die .094 Emot–Disgust bad, hate, shit, finally, damn, feeling, hell, bitch, boy, sick .086 Emot–Negative wait, bad, hate, shit, black, damn, ass, wrong, vote, feeling .084 Emot–Sadness bad, hate, music, black, vote, feeling, hell, crazy, lost, bitch .181 → .103 Political Terms obama, liberal, president, government, senators, bernie, law, #demdebate, same-sex, feminist, congress, republicans, clinton, gay, #p2, iran, activists, bush, sanders, obamacare, terrorists, justice, debate, republican, #obamacare, @moveon, @barackobama, #tcot, democrats, politics, ... Table 1: Pearson correlations between political ideology groups and text features, split into Word2Vec clusters (W2V), LIWC categories (L), emotions (Emot) and political terms (maximum 5 categories per group). All correlations are significant at p < .01, two-tailed t-test and are controlled for age and gender. Words in a category are sorted by frequency in our data set. employed by both sides. For example, conservatives retweet or mention politicians such as Donald Trump or Ted Cruz, while liberals mention Barack Obama. Extreme conservatives also reference known partisan conservative media sources (@foxnews, @yahoonews) and hashtags (#pjnet, 733 #tcot), while extreme liberals focus on issues (‘gay’, ‘racism’, ‘feminism’, ‘transgender’). 
This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform. Liberals, by contrast, use the platform to discuss and popularize their causes. 5.2 Moderate Conservatives vs. Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies. While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE). Moderate liberals are identified by very different features compared to their more extreme counterparts. Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015). Two word clusters relating to British culture (W2V-458) and art (W2V373) reflect that liberals are more inclined towards arts (Dollinger, 2007). Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later. 5.3 Moderates vs. Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement. Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users. However, regardless of their orientation, the ideological extremists stand out from moderates. They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V296, W2V-288). LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets. The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates. This reveals – combined with the finding from the first comparison – that while extreme conservatives are overall more positive than liberals, both groups share negative expression. Political terms are almost all significantly correlated with the extreme ideological groups, 2.64 0.76 0.55 0.42 0.36 0.46 0.51 0.76 2.95 0.73 0.24 0.14 0.07 0.07 0.09 0.12 0.19 0.79 0.11 0.03 0.03 0.02 0.02 0.03 0.03 0.04 0.18 0.00 0.50 1.00 1.50 2.00 2.50 3.00 D2: Con. V.Con.(1) Con.(2) M.Con.(3) Mod.(4) M.Lib.(5) Lib.(6) V.Lib.(7) D2: Lib. Political words Political NEs Media NEs Figure 3: Distribution of political word and entity usage across political categories in % from the total words used. Users from data set D2 who are following the accounts of the four political figures are prefixed with D2. The rest of the categories are from data set D1. confirming the existence of a difference in political engagement which we study in detail next. 5.4 Political Terms Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D1 and the two political groups from D2. We notice the following: • D2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D1; • Within the groups in D1, we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1–2/6–7 is larger than 2–3/5–6. 
The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999). It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016).

6 Prediction

In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.

6.1 Cross-Group Prediction

First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D1 and between the two polarized groups in D2. We use logistic regression classification to compare three setups in Table 2, with results measured in ROC AUC as the classes are slightly imbalanced:
• 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal);
• A train–test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns);
• A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007), as a proof of concept on the effects of using additional distantly supervised data. Data pooling led to worse results than EasyAdapt.

Each of the three tasks from D1 has a similar number of training samples, hence we do not expect data set size to have any effect when comparing the results across tasks. The results with both sets of features show that:
• Prediction performance is much higher for D2 than for D1, with the more extreme groups in D1 being easier to predict than the moderate groups. This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013). We further show that, as the level of political engagement decreases, the classification problem becomes even harder;
• The model trained on D2 and Word2Vec word clusters performs significantly worse on D1 tasks even if the training data is over 10 times larger. When using political words, the D2-trained classifier performs relatively well on all tasks from D1;
• Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks;
• Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).

(a) Word2Vec 500
Train \ Test: 1v7 2v6 3v5 D2
1v7: .785 .639 (.681) .575 (.598) .705 (.887)
2v6: .729 (.789) .662 .574 (.586) .663 (.889)
3v5: .618 (.778) .617 (.690) .581 .684 (.887)
D2: .708 (.764) .627 (.644) .571 (.574) .891

(b) Political Terms
Train \ Test: 1v7 2v6 3v5 D2
1v7: .785 .657 (.679) .589 (.616) .928 (.976)
2v6: .739 (.773) .679 .593 (.612) .920 (.976)
3v5: .727 (.766) .636 (.670) .590 .891 (.976)
D2: .766 (.789) .677 (.683) .625 (.613) .972

Table 2: Prediction results of the logistic regression classification in ROC AUC when discriminating between two political groups across different levels of engagement and both data sets. The binary classifier from data set D2 is represented by D2; the rest of the categories are from data set D1. Results on the principal diagonal represent 10-fold cross-validation results (training in-domain). Results off-diagonal represent training the classifier from the column and testing on the problem indicated in the row (training out-of-domain). Numbers in brackets indicate performance when the training data was added in the 10-fold cross-validation setup using the EasyAdapt algorithm (domain adaptation). Best results without domain adaptation are in bold, while the best results with domain adaptation are in italics.
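The EasyAdapt numbers in brackets rely on the feature augmentation trick of Daumé III (2007): every instance keeps a shared copy of its features plus a copy in a block reserved for its own domain, and a standard classifier is then trained on the augmented space. A minimal sketch follows; the dense feature matrices and the variable names are assumptions, not the authors' code.

```python
import numpy as np

def easy_adapt(X, domain, n_domains):
    """Feature augmentation of Daume III (2007): [shared copy | one block per domain].
    X: (n_samples, n_features) array; domain: integer domain id per sample."""
    n, d = X.shape
    X_aug = np.zeros((n, d * (n_domains + 1)))
    X_aug[:, :d] = X                                   # shared (general) copy
    for i, dom in enumerate(domain):
        start = d * (dom + 1)
        X_aug[i, start:start + d] = X[i]               # domain-specific copy
    return X_aug

# Usage sketch: domain 0 = the target task's own training folds, domain 1 = the
# supplementary task's data; X_* and y_* are hypothetical arrays.
# X_train = np.vstack([X_target_train, X_supplement])
# d_train = np.array([0] * len(X_target_train) + [1] * len(X_supplement))
# clf.fit(easy_adapt(X_train, d_train, n_domains=2), y_train)
```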
6.2 Political Leaning and Engagement Prediction

Political leaning (Conservative – Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression. In addition to the political leaning prediction, based on our analysis and the previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side. Thus, we merge users from classes 3–5, 2–6, 1–7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users. We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in scikit-learn (Pedregosa et al., 2011). To evaluate our results, we split our data into 10 stratified folds and performed cross-validation on one held-out fold at a time. For all our methods we tune the parameters of our models on a separate validation fold. The overall performance is assessed using the Pearson correlation between the set of predicted values and the user-reported score. Results are presented in Table 3. The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).

Features # Feat. Political Leaning Political Engagement
Unigrams 6060 .294 .165
LIWC 73 .286 .149
Word2Vec Clusters 500 .300 .169
Emotions 8 .145 .079
Political Terms 234 .256 .169
All (Ensemble) 5 .369 .196
Table 3: Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup. Political leaning is represented on the 1–7 scale, removing the moderates (4). Political engagement is a scale ranging from 4 through 3–5 and 2–6 to 1–7.

The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement. Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks. For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy. This result is expected based on the results from Figure 3, which showed how political term usage varies across groups, and how it is especially dependent on political engagement. While political terms are very effective at distinguishing between two opposing political groups, they cannot discriminate as well between levels of engagement within the same ideological orientation. Combining all classifiers' predictions in a linear ensemble obtains the best results when compared to each individual category.
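The regression setup just described could be sketched with scikit-learn roughly as below. The fixed regularization values, the scaling step, and the X / y placeholders are assumptions; the paper tunes its parameters on a separate validation fold, which is omitted here for brevity.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def crossval_pearson(X, y, alpha=1.0, l1_ratio=0.5, n_splits=10, seed=0):
    """10-fold cross-validation of an Elastic Net regressor, scored by the Pearson
    correlation between predictions and the self-reported ordinal scores."""
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    preds = np.zeros(len(y), dtype=float)
    for train_idx, test_idx in folds.split(X, y):      # stratify on the ordinal label
        model = make_pipeline(StandardScaler(),
                              ElasticNet(alpha=alpha, l1_ratio=l1_ratio))
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    r, _ = pearsonr(preds, y)
    return r

# r = crossval_pearson(X, y)   # X: user features, y: 1-7 leaning with moderates removed
```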
6.3 Encoding Class Structure

In our previous experiments, we uncovered that certain relationships exist between the seven groups. For example, extreme conservatives and liberals both demonstrate strong political engagement. Therefore, this class structure can be exploited to improve classification performance. To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework. In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification. Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b,d). The group structure is encoded into a matrix R which codes the groups that are considered similar. The objective of the sparse graph regularized multi-task learning problem is:

\min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log\left(1 + \exp\left(-Y_{t,i}\left(W_t^{\top} X_{t,i} + c_t\right)\right)\right) + \gamma \|WR\|_F^2 + \lambda \|W\|_1

where τ is the number of tasks, N the number of samples, X the feature matrix, Y the outcome matrix, W_t and c_t are the model for task t, and R is the structure matrix. We define three R matrices: (1) one that codes that groups with similar political engagement are similar (i.e., 1–7, 2–6, 3–5); (2) one that codes that groups from each ideological side are similar (i.e., 1–2, 1–3, 2–3, 5–6, 5–7, 6–7); (3) one learnt from the data. Results are presented in Table 4. Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem, although most misclassifications are within one ideology point. The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political leaning-based matrix (GR–Leaning) obtaining 2% higher accuracy than the political engagement one (GR–Engagement) and the learnt matrix (GR–Learnt) obtaining the best results.

Method Accuracy
Baseline 19.6%
LR 22.2%
GR–Engagement 24.2%
GR–Leaning 26.2%
GR–Learnt 27.6%
Table 4: Experimental results for seven-way classification using multi-task learning (GR–Engagement, GR–Leaning, GR–Learnt) and 500 Word2Vec clusters as features.

7 Conclusions

This study analyzed user-level political ideology through Twitter posts. In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports. We showed that users in our data set are far less
While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance. In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement. Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions. Acknowledgments The authors acknowledge the support of the Templeton Religion Trust, grant TRT-0048. We wish to thank Prof. David S. Rosenblum for supporting the research visit of Ye Liu. References Alan I Abramowitz. 2010. The Disappearing Center: Engaged Citizens, Polarization, and American Democracy. Yale University Press. Firoj Alam and Giuseppe Riccardi. 2014. Predicting Personality Traits using Multimodal Information. In Workshop on Computational Personality Recognition (WCPR). MM, pages 15–18. Stephen Ansolabehere, Jonathan Rodden, and James M Snyder. 2008. The strength of issues: Using multiple measures to gauge preference stability, ideological constraint, and issue voting. American Political Science Review 102(02):215–232. Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. 2007. Multi-task Feature Learning. In Advances in Neural Information Processing Systems. NIPS, pages 41–49. Joseph Bafumi and Michael C Herron. 2010. Leapfrog Representation and Extremism: A Study of American Voters and their Members in Congress. American Political Science Review 104(03):519–542. Pablo BarberASa. 2015. Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation using Twitter Data. Political Analysis 23(1):76–91. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research 3:993–1022. David E Broockman. 2016. Approaches to Studying Policy Representation. Legislative Studies Quarterly 41(1):181–215. D. John Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. EMNLP, pages 1301–1309. Jordan Carpenter, Daniel Preot¸iuc-Pietro, Lucie Flekova, Salvatore Giorgi, Courtney Hagan, Margaret Kern, Anneke Buffone, Lyle Ungar, and Martin Seligman. 2016. Real Men don’t say ’Cute’: Using Automatic Language Analysis to Isolate Inaccurate Aspects of Stereotypes. Social Psychological and Personality Science . Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you Tweet: A Content-Based Approach to Geo-Locating Twitter Users. In Proceedings of the 19th ACM Conference on Information and Knowledge Management. CIKM, pages 759–768. Raviv Cohen and Derek Ruths. 2013. Classifying Political Orientation on Twitter: It’s Not Easy! In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media. ICWSM, pages 91–99. Michael D Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the Political Alignment of Twitter Users. In IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and the IEEE Third Inernational Conference on Social Computing (SocialCom). pages 192–199. Philip E Converse. 1964. The Nature of Belief Systems in Mass Publics. In David Apter, editor, Ideology and Discontent, Free Press, New York. Hal Daum´e III. 2007. Frustratingly Easy Domain Adaptation. 
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. ACL, pages 256–263. 737 Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(6):391–407. Stephen J Dollinger. 2007. Creativity and Conservatism. Personality and Individual Differences 43(5):1025–1035. Paul Ekman. 1992. An Argument for Basic Emotions. Cognition & Emotion 6(3-4):169–200. Christopher Ellis and James A Stimson. 2012. Ideology in America. Cambridge University Press. Morris P Fiorina. 1999. Extreme Voices: A Dark Side of Civic Engagement. In Morris P. Fiorina and Theda Skocpol, editors, Civic engagement in American democracy, Washington, DC: Brookings Institution Press, pages 405–413. Lucie Flekova, Jordan Carpenter, Salvatore Giorgi, Lyle Ungar, and Daniel Preot¸iuc-Pietro. 2016a. Analyzing Biases in Human Perception of User Age and Gender from Text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. ACL, pages 843–854. Lucie Flekova, Lyle Ungar, and Daniel PreoctiucPietro. 2016b. Exploring Stylistic Variation with Age and Income on Twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. ACL, pages 313–319. Andrew Gelman. 2009. Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way they Do. Princeton University Press. Alan S Gerber, Gregory A Huber, David Doherty, Conor M Dowling, and Shang E Ha. 2010. Personality and Political Attitudes: Relationships across Issue Domains and Political Contexts. American Political Science Review 104(01):111–133. Jeffrey A Hall, Natalie Pennington, and Allyn Lueders. 2014. Impression Management and Formation on Facebook: A Lens Model Approach. New Media & Society 16(6):958–982. Z. Harris. 1954. Distributional Structure. Word 10(23):146 – 162. Eitan D Hersh. 2015. Hacking the Electorate: How Campaigns Perceive Voters. Cambridge University Press. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political Ideology Detection using Recursive Neural Networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. ACL, pages 1113–1122. Cindy D Kam, Jennifer R Wilking, and Elizabeth J Zechmeister. 2007. Beyond the Narrow Data base: Another Convenience Sample for Experimental Research. Political Behavior 29(4):415–440. Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private Traits and Attributes are Predictable from Digital Records of Human Behavior. PNAS 110(15):5802–5805. George Lakoff. 1997. Moral Politics: What Conservatives Know that Liberals Don’t. University of Chicago Press. Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and Characterising User Impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. EACL, pages 405–413. Randall A Lewis and David H Reiley. 2014. Online Ads and Offline Sales: Measuring the Effect of Retail Advertising via a Controlled Experiment on Yahoo! Quantitative Marketing and Economics 12(3):235–266. Leqi Liu, Daniel Preot¸iuc-Pietro, Zahra Riahi Samani, Mohsen E. Moghaddam, and Lyle Ungar. 2016a. Analyzing Personality through Social Media Profile Picture Choice. In Proceedings of the Tenth International AAAI Conference on Weblogs and Social Media. ICWSM, pages 211–220. 
Ye Liu, Liqiang Nie, Lei Han, Luming Zhang, and David S Rosenblum. 2015. Action2Activity: Recognizing Complex Activities from Sensor Data. In Proceedings of the International Joint Conference on Artificial Intelligence. IJCAI, pages 1617–1623. Ye Liu, Liqiang Nie, Li Liu, and David S Rosenblum. 2016b. From Action to Activity: Sensor-based Activity Recognition. Neurocomputing 181:108–115. Ye Liu, Luming Zhang, Liqiang Nie, Yan Yan, and David S Rosenblum. 2016c. Fortune Teller: Predicting your Career Path. In Proceedings of the AAAI Conference on Artificial Intelligence. AAAI, pages 201–207. Ye Liu, Yu Zheng, Yuxuan Liang, Shuming Liu, and David S. Rosenblum. 2016d. Urban Water Quality Prediction Based on Multi-task Multi-view Learning. In Proceedings of the International Joint Conference on Artificial Intelligence. IJCAI, pages 2576–2582. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems. NIPS, pages 3111–3119. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013b. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2010 annual Conference of the North American Chapter of the Association for Computational Linguistics. NAACL, pages 746–751. 738 Saif M. Mohammad and Peter D. Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. NAACL, pages 26–34. Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. Computational Intelligence 29(3):436–465. Burt L Monroe, Michael P Colaresi, and Kevin M Quinn. 2008. Fightin’ Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict. Political Analysis 16(4):372–403. Jaime L Napier and John T Jost. 2008. Why are Conservatives Happier than Liberals? Psychological Science 19(6):565–572. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine Learning in Python. JMLR 12. Marco Pennacchiotti and Ana-Maria Popescu. 2011. A Machine Learning Approach to Twitter User Classification. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media. ICWSM, pages 281–288. James W. Pennebaker, Martha E. Francis, and Roger J. Booth. 2001. Linguistic Inquiry and Word Count. Mahway: Lawrence Erlbaum Associates. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. EMNLP, pages 1532–1543. Bryan Perozzi and Steven Skiena. 2015. Exact Age Prediction in Social Networks. In Proceedings of the 24th International Conference on World Wide Web. WWW, pages 91–92. Daniel Preot¸iuc-Pietro, Jordan Carpenter, Salvatore Giorgi, and Lyle Ungar. 2016. Studying the Dark Triad of Personality using Twitter Behavior. In Proceedings of the 25th ACM Conference on Information and Knowledge Management. CIKM, pages 761–770. Daniel Preot¸iuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015a. An Analysis of the User Occupational Class through Twitter Content. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. ACL, pages 1754–1764. Daniel Preot¸iuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015b. Studying User Income through Language, Behaviour and Affect in Social Media. PLoS ONE . Anna Priante, Djoerd Hiemstra, Tijs van den Broek, Aaqib Saeed, Michel Ehrenhard, and Ariana Need. 2016. #WhoAmI in 160 Characters? Classifying Social Identities Based on Twitter. In Proceedings of the Workshop on Natural Language Processing and Computational Social Science. EMNLP, pages 55–65. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying Latent User Attributes in Twitter. In Proceedings of the 2nd International Workshop on Search and Mining Usergenerated Contents. SMUC, pages 37–44. Dominic Rout, Daniel Preot¸iuc-Pietro, Bontcheva Kalina, and Trevor Cohn. 2013. Where’s @wally: A Classification Approach to Geolocating Users based on their Social Ties. In Proceedings of the 24th ACM Conference on Hypertext and Social Media. HT, pages 11–20. Maarten Sap, Gregory Park, Johannes C. Eichstaedt, Margaret L. Kern, David J. Stillwell, Michal Kosinski, Lyle H. Ungar, and Hansen Andrew Schwartz. 2014. Developing Age and Gender Predictive Lexica over Social Media. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. EMNLP, pages 1146–1151. H Andrew Schwartz, Johannes C Eichstaedt, Margaret L Kern, Lukasz Dziurzynski, Stephanie M Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, and Martin EP Seligman. 2013. Personality, Gender, and Age in the Language of Social Media: The Open-vocabulary Approach. PloS ONE 8(9). Jianbo Shi and Jitendra Malik. 2000. Normalized Cuts and Image Segmentation. Transactions on Pattern Analysis and Machine Intelligence 22(8):888–905. Carlo Strapparava and Rada Mihalcea. 2008. Learning to Identify Emotions in Text. In Proceedings of the 2008 ACM Symposium on Applied Computing. pages 1556–1560. Carlo Strapparava, Alessandro Valitutti, et al. 2004. WordNet Affect: an Affective Extension of WordNet. In Proceedings of the Fourth International Conference on Language Resources and Evaluation. volume 4 of LREC, pages 1083–1086. Ramanathan Subramanian, Yan Yan, Jacopo Staiano, Oswald Lanz, and Nicu Sebe. 2013. On the Relationship between Head Pose, Social Attention and Personality Prediction for Unstructured and Dynamic Group Interactions. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction. ICMI, pages 3–10. Karolina Sylwester and Matthew Purver. 2015. Twitter Language Use Reflects Psychological Differences between Democrats and Republicans. PLoS ONE 10(9). 739 Brandon Van Der Heide, Jonathan D D’Angelo, and Erin M Schumaker. 2012. The Effects of Verbal versus Photographic Self-presentation on Impression Formation in Facebook. Journal of Communication 62(1):98–116. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring User Political Preferences from Streaming Communications. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. ACL, pages 186–196. Ulrike von Luxburg. 2007. A Tutorial on Spectral Clustering. Statistics and Computing 17(4):395– 416. Pengfei Wang, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2016. Your Cart tells You: Inferring Demographic Attributes from Purchase Data. 
In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. WSDM, pages 173–182. Ingmar Weber, Venkata Rama Kiran Garimella, and Asmelash Teka. 2013. Political Hashtag Trends. In European Conference on Information Retrieval. ECIR, pages 857–860. Tae Yano, Philip Resnik, and Noah A Smith. 2010. Shedding (a Thousand Points of) Light on Biased Language. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. NAACL, pages 152–158. Muhammad Bilal Zafar, Krishna P Gummadi, and Cristian Danescu-Niculescu-Mizil. 2016. Message Impartiality in Social Media Discussions. In Proceedings of the Tenth International AAAI Conference on Weblogs and Social Media. ICWSM, pages 466–475. Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and Latent Attribute Inference: Inferring Latent Attributes of Twitter Users from Neighbors. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media. ICWSM, pages 387–390. Jiayu Zhou, Jianhui Chen, and Jieping Ye. 2011. MALSAR: Multi-Task Learning via Structural Regularization. Arizona State University . Hui Zou and Trevor Hastie. 2005. Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society, Series B . 740
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 741–752 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1069 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 741–752 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1069 Leveraging Behavioral and Social Information for Weakly Supervised Collective Classification of Political Discourse on Twitter Kristen Johnson, Di Jin, Dan Goldwasser Department of Computer Science Purdue University, West Lafayette, IN 47907 {john1187, jind, dgoldwas}@purdue.edu Abstract Framing is a political strategy in which politicians carefully word their statements in order to control public perception of issues. Previous works exploring political framing typically analyze frame usage in longer texts, such as congressional speeches. We present a collection of weakly supervised models which harness collective classification to predict the frames used in political discourse on the microblogging platform, Twitter. Our global probabilistic models show that by combining both lexical features of tweets and network-based behavioral features of Twitter, we are able to increase the average, unsupervised F1 score by 21.52 points over a lexical baseline alone. 1 Introduction The importance of understanding political discourse on social media platforms is becoming increasingly clear. In recent U.S. presidential elections, Twitter was widely used by all candidates to promote their agenda, interact with supporters, and attack their opponents. Social interactions on such platforms allow politicians to quickly react to current events and gauge interest in and support for their actions. These dynamic settings emphasize the importance of constructing automated tools for analyzing this content. However, these same dynamics make constructing such tools difficult, as the language used to discuss new events and political agendas continuously changes. Consequently, the rich social interactions on Twitter can be leveraged to help support such analysis by providing alternatives to direct supervision. In this paper we focus on political framing, a very nuanced political discourse analysis task, on a variety of issues frequently discussed on Twitter. Framing (Entman, 1993; Chong and Druckman, 2007) is employed by politicians to bias the discussion towards their stance by emphasizing specific aspects of the issue. For example, the debate around increasing the minimum wage can be framed as a quality of life issue or as an economic issue. While the first frame supports increasing minimum wage because it improves workers’ lives, the second frame, by conversely emphasizing the costs involved, opposes the increase. Using framing to analyze political discourse has gathered significant interest over the last few years (Tsur et al., 2015; Card et al., 2015; Baumer et al., 2015) as a way to automatically analyze political discourse in congressional speeches and political news articles. Different from previous works which focus on these longer texts or single issues, our dataset includes tweets authored by all members of the U.S. Congress from both parties, dealing with several policy issues (e.g., immigration, ACA, etc.). These tweets were annotated by adapting the annotation guidelines developed by Boydstun et al. (2014) for Twitter. 
Twitter issue framing is a challenging multilabel prediction task. Each tweet can be labeled as using one or more frames, out of 17 possibilities, while only providing 140 characters as input to the classifier. The main contribution of this work is to evaluate whether the social and behavioral information available on Twitter is sufficient for constructing a reliable classifier for this task. We approach this framing prediction task using a weakly supervised collective classification approach which leverages the dependencies between tweet frame predictions based on the interactions between their authors. These dependencies are modeled by connecting Twitter users who have social connections or behavioral similarities. Social connections are di741 rected dependencies that represent the followers of each user as well as retweeting behavior (i.e., user A retweets user B’s content). Interestingly, such social connections capture the flow of influence within political parties; however, the number of connections that cross party lines is extremely low. Instead, we rely on capturing behavioral similarity between users to provide this information. For example, users whose Twitter activity peaks at similar times tend to discuss issues in similar ways, providing indicators of their frame usage for those issues. In addition to using social and behavioral information, our approach also incorporates each politician’s party affiliation and the frequent phrases (e.g., bigrams and trigrams) used by politicians on Twitter. These lexical, social, and behavioral features are extracted from tweets via weakly supervised models and then declaratively compiled into a graphical model using Probabilistic Soft Logic (PSL), a recently introduced probabilistic modeling framework.1 As described in Section 4, PSL specifies high level rules over a relational representation of these features. These rules are then compiled into a graphical model called a hingeloss Markov random field (Bach et al., 2013), which is used to make the frame prediction. Instead of direct supervision we take a bootstrapping approach by providing a small seed set of keywords adapted from Boydstun et al. (2014), for each frame. Our experiments show that modeling social and behavioral connections improves F1 prediction scores in both supervised and unsupervised settings, with double the increase in the latter. We apply our unsupervised model to our entire dataset of tweets to analyze framing patterns over time by both party and individual politicians. Our analysis provides insight into the usage of framing for identification of aisle-crossing politicians, i.e., those politicians who vote against their party. 2 Related Work Issue framing is related to the broader challenges of biased language analysis (Recasens et al., 2013; Choi et al., 2012; Greene and Resnik, 2009) and subjectivity (Wiebe et al., 2004). Several previous works have explored framing in public statements, congressional speeches, and news articles (Fulgoni et al., 2016; Tsur et al., 2015; Card 1http://psl.cs.umd.edu et al., 2015; Baumer et al., 2015). Our approach builds upon the previous work on frame analysis of Boydstun et al. (2014), by adapting and applying their annotation guidelines for Twitter. In recent years there has been growing interest in analyzing political discourse. 
Most previous work focuses on opinion mining and stance prediction (Sridhar et al., 2015; Hasan and Ng, 2014; Abu-Jbara et al., 2013; Walker et al., 2012; Abbott et al., 2011; Somasundaran and Wiebe, 2010, 2009). Analyzing political tweets has also attracted considerable interest: a recent SemEval task looked into stance prediction,2 and more related to our work, Tan et al. (2014) have shown how wording choices can affect message propagation on Twitter. Two recent works look into predicting stance (at user and tweet levels respectively) on Twitter using PSL (Johnson and Goldwasser, 2016; Ebrahimi et al., 2016). Frame classification, however, has a finer granularity than stance classification and describes how someone expresses their view on an issue, not whether they support the issue. Other works focus on identifying and measuring political ideologies (Iyyer et al., 2014; Bamman and Smith, 2015; Sim et al., 2013), policies (Nguyen et al., 2015), and voting patterns (Gerrish and Blei, 2012). Exploiting social interactions and group structure for prediction has also been explored (Sridhar et al., 2015; Abu-Jbara et al., 2013; West et al., 2014). Works focusing on inferring signed social networks (West et al., 2014), stance classification (Sridhar et al., 2015), social group modeling (Huang et al., 2012), and collective classification using PSL (Bach et al., 2015) are closest to our approach. Unsupervised and weakly supervised models of Twitter data for several various tasks have been suggested, including: profile (Li et al., 2014b) and life event extraction (Li et al., 2014a), conversation modeling (Ritter et al., 2010), and methods for dealing with the unique language used in microblogs (Eisenstein, 2013). Predicting political affiliation and other characteristics of Twitter users has been explored (Volkova et al., 2015, 2014; Yano et al., 2013; Conover et al., 2011). Others have focused on sentiment analysis (Pla and Hurtado, 2014; Bakliwal et al., 2013), predicting ideology (Djemili et al., 2014), automatic polls 2http://alt.qcri.org/semeval2016/ task6/ 742 based on Twitter sentiment and political forecasting using Twitter (Bermingham and Smeaton, 2011; O’Connor et al., 2010; Tumasjan et al., 2010), as well as distant supervision applications (Marchetti-Bowick and Chambers, 2012). Several works from political and social science research have studied the role of Twitter and framing in shaping public opinion of certain events, e.g. the Vancouver riots (Burch et al., 2015) and the Egyptian protests (Harlow and Johnson, 2011; Meraz and Papacharissi, 2013). Others have covered framing and sentiment analysis of opponents (Groshek and Al-Rawi, 2013) and network agenda modeling (Vargo et al., 2014) in the 2012 U.S. presidential election. Jang and Hart (2015) studied frames used by the general population specific to global warming. In contrast to these works, we predict the issue-independent general frames of tweets, by U.S. politicians, which discuss six different policy issues. 3 Data Collection and Annotation Data Collection and Preprocessing: We collected 184,914 of the most recent tweets of members of the U.S. Congress (both the House of Representatives and Senate). 
Using an average of ten keywords per issue, we filtered out tweets not related to the following six issues of interest: (1) limiting or gaining access to abortion, (2) debates concerning the Affordable Care Act (i.e., ACA or Obamacare), (3) the issue of gun rights versus gun control, (4) effects of immigration policies, (5) acts of terrorism, and (6) issues concerning the LGBTQ community. Forty politicians (10 Republicans and 10 Democrats, from both the House and Senate), were chosen randomly for annotation. Table 1 presents the statistics of our congressional tweets dataset, which is available for the community.3 Appendix A contains more details of our dataset and preprocessing steps. Data Annotation: Two graduate students were trained in the use of the Policy Frames Codebook developed by Boydstun et al. (2014) for annotating each tweet with a frame. The general aspects of each frame are shown in Table 2. Frames are designed to generalize across issues and overlap of multiple frames is possible. Additionally, the Codebook is typically applied to newspaper ar3The dataset and PSL scripts are available at: http://purduenlp.cs.purdue.edu/projects/ twitterframing. ticles where discussion of policy can encompass other frames in the text. Consequently, annotators using the Codebook are advised to be careful when assigning Frame 13 to a text. Based on this guidance and the difficulty of labeling tweets (as discussed in Card et al. (2015)), annotators were instructed to use the following procedure: (1) attempt to assign a primary frame to the tweet if possible, (2) if not possible, assign two frames to the tweet where the first frame is chosen as the more accurate of the two frames, (3) when assigning frames 12 through 17, double check that the tweet cannot be assigned to any other frames. Annotators spent one month labeling the randomly chosen tweets. For all tweets with more than one frame, annotators met to come to a consensus on whether the tweet should have one frame or both. The labeled dataset has an inter-annotator agreement, calculated using Cohen’s Kappa statistic, of 73.4%. Extensions of the Codebook for Twitter Use: The first 14 frames outlined in Table 2 are directly applicable to the tweets of U.S. politicians. In our labeled set, Frame 15 (Other) was never used. Therefore, we drop its analysis from this paper. From our observations, we propose the addition of the 3 frames at the bottom of Table 2 for Twitter analysis: Factual, (Self) Promotion, and Personal Sympathy and Support. Tweets that present a fact, with no detectable political spin or twists, are labeled as having the Factual frame (15). Tweets that discuss a politician’s appearances, speeches, statements, or refer to political friends are considered to have the (Self) Promotion frame. Finally, tweets where a politician offers their “thoughts and prayers”, condolences, or stands in support of others, are considered to have the Personal frame. We find that for many tweets, one frame is not enough. This is caused by the compound nature of many tweets, e.g., some tweets are two separate sentences, with each sentence having a different frame or tweets begin with one frame and end with another. A final problem, that may also be relevant to longer text articles, is that of subframes within a larger frame. For example, the tweet “We must bolster the security of our borders and craft an immigration policy that grows our economy.” has two frames: Security & Defense and Economic. 
However, both frames could fall under Frame 13 (Policy), if this tweet as a whole was a rebuttal point about an immigration policy. The lack of 743 Tweets BY PARTY BY ISSUE REP DEM ABORTION ACA GUNS IMMIGRATION TERRORISM LGBTQ ENTIRE DATASET 48504 43953 6467 35854 15532 13442 15205 6046 LABELED SUBSET 894 1156 170 564 543 233 446 183 Table 1: Statistics of Collected Tweets. REP stands for Republican and DEM for Democrats. FRAME NUMBER, FRAME NAME, AND BRIEF DESCRIPTION OF FRAME 1. ECONOMIC: Pertains to the economic impacts of a policy 2. CAPACITY & RESOURCES: Pertains to lack of or availability of resources 3. MORALITY & ETHICS: Motivated by religious doctrine, righteousness, sense of responsibility 4. FAIRNESS & EQUALITY: Of how laws, punishments, resources, etc. are distributed among groups 5. LEGALITY, CONSTITUTIONALITY, & JURISDICTION: Including court cases, restriction and expressions of rights 6. CRIME & PUNISHMENT: Policy violation and consequences 7. SECURITY & DEFENSE: Threats or defenses/preemptive actions to protect against threats 8. HEALTH & SAFETY: Includes care access and effectiveness 9. QUALITY OF LIFE: Effects on individual and community life 10. CULTURAL IDENTITY: Culture’s norms, trends, customs 11. PUBLIC SENTIMENT: Pertains to opinions, polling, and demographics 12. POLITICAL FACTORS & IMPLICATIONS: Efforts, stances, filibusters, lobbying, references to other politicians 13. POLICY DESCRIPTION, PRESCRIPTION, & EVALUATION: Discusses effectiveness of current or proposed policies 14. EXTERNAL REGULATION AND REPUTATION: Interstate and international relationships of the U.S. 15. FACTUAL: Expresses a pure fact, with no detectable political spin 16. (SELF) PROMOTION: Promotes another person or the author in some way, e.g. television appearances 17. PERSONAL SYMPATHY & SUPPORT: Expresses sympathy, emotional response, or solidarity with others Table 2: Frames and Descriptions. The first 14 are Boydstun’s frames and the last 3 are our proposed Twitter-specific frames. Boydstun’s original Frame 15 (Other) is omitted from this study. available context for short tweets can make it difficult to determine if a tweet should have one primary frame or is more accurately represented by multiple frames. 4 Global Models of Twitter Language and Activity Due to the dynamic nature of political discourse on Twitter, our approach is designed to require as little supervision as possible. We implement 6 weakly supervised models which are datadependent and used to extract and format information from tweets into input for PSL predicates. These predicates are then combined into the probabilistic rules of each model as shown in Table 3. The only sources of supervision these models require includes: unigrams related to the issues, unigrams adapted from the Boydstun et al. (2014) Codebook for frames, and political party of the author of the tweets. 4.1 Global Modeling Using PSL PSL is a declarative modeling language which can be used to specify weighted, first-order logic rules. These rules are compiled into a hinge-loss Markov random field which defines a probability distribution over possible continuous value assignments to the random variables of the model (Bach et al., 2015).4 This probability density function is represented as: P(Y | X) = 1 Z exp − M X r=1 λrφr(Y , X) ! where Z is a normalization constant, λ is the weight vector, and φr(Y, X) = (max{lr(Y, X), 0})⇢r is the hinge-loss potential specified by a linear function lr. The exponent ⇢r 2 1, 2 is optional. 
Each potential represents the instantiation of a rule, which takes the following form:

λ1 : P1(x) ∧ P2(x, y) → P3(y)
λ2 : P1(x) ∧ P4(x, y) → ¬P3(y)

P1, P2, P3, and P4 are predicates (e.g., political party, issue, frame, and presence of n-grams) and x, y are variables. Each rule has a weight λ which reflects that rule's importance and is learned using the Expectation-Maximization algorithm in our unsupervised experiments. Using concrete constants a, b (e.g., tweets and words) which instantiate the variables x, y, model atoms are mapped to continuous [0,1] assignments (unlike other probabilistic logical models, e.g. MLNs, in which the model's random variables are strictly true or false). More important rules (i.e., those with larger weights) are given preference by the model.

4.2 Language Based Models

Unigrams: Using the guidelines provided in the Policy Frames Codebook (Boydstun et al., 2014), we adapted a list of expected unigrams for each frame. For example, unigrams that should be related to Frame 12 (Political Factors & Implications) include: filibuster, lobby, Democrats, Republicans. We expect that if a tweet and frame contain a matching unigram, then that frame is likely present in that tweet. The information that tweet T has expected unigram U of frame F is represented with the PSL predicate UNIGRAM_F(T, U). This knowledge is then used as input to PSL Model 1 via the rule UNIGRAM_F(T, U) → FRAME(T, F) (shown in line 1 of Table 3). However, not every tweet will have a unigram that matches those in this list. Under the intuition that at least one unigram in a tweet should be similar to a unigram in the list, we designed the following MaxSim metric to compute the maximum similarity between a word in a tweet and a word from the list of unigrams:

MAXSIM(T, F) = argmax_{U ∈ F, W ∈ T} SIMILARITY(W, U)    (1)

T is a tweet, W is each word in T, and U is each unigram in the list of expected unigrams (per frame). SIMILARITY is the computed word2vec similarity (using pretrained embeddings) of each word in the tweet with every unigram in the list of unigrams for each frame. The frame F of the maximum scoring unigram is input to the PSL predicate MAXSIM_F(T, F), which indicates that tweet T has the highest similarity to frame F.
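To illustrate Equation (1), here is a minimal sketch of the MaxSim computation. It assumes `embeddings` is a dictionary mapping words to vectors from pretrained word2vec embeddings and `frame_unigrams` maps each frame to its adapted unigram list; the names and data structures are illustrative, not the authors' code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_sim(tweet_tokens, unigrams, embeddings):
    """Maximum similarity between any word in the tweet and any expected
    unigram of a frame (Equation 1), skipping out-of-vocabulary words."""
    scores = [cosine(embeddings[w], embeddings[u])
              for w in tweet_tokens for u in unigrams
              if w in embeddings and u in embeddings]
    return max(scores) if scores else 0.0

def max_sim_frame(tweet_tokens, frame_unigrams, embeddings):
    """Frame whose unigram list is most similar to the tweet; this frame is
    what feeds the MAXSIM_F(T, F) predicate."""
    return max(frame_unigrams,
               key=lambda f: max_sim(tweet_tokens, frame_unigrams[f], embeddings))
```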
To visualize slogan usage by parties for different issues, we used the entire tweets dataset, including all unlabeled tweets, to extract the top bigrams 0 2000 4000 6000 8000 10000 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97 Frequency of Occurences Bigrams Democrat Top 100 Bigrams Abortion ACA Guns Immigration Terrorism LGBTQ (a) Democrat Bigrams 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97 Frequency of Occurences Bigrams Republican Top 100 Bigrams Abortion ACA Guns Immigration Terrorism LGBTQ (b) Republican Bigrams 0 500 1000 1500 2000 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97 Frequency of Occurences Trigrams Democrat Top 100 Trigrams Abortion ACA Guns Immigration Terrorism LGBTQ (c) Democrat Trigrams 0 200 400 600 800 1000 1200 1400 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97 Frequency of Occurences Trigrams Republican Top 100 Trigrams Abortion ACA Guns Immigration Terrorism LGBTQ (d) Republican Trigrams Figure 1: Distributions of Bigrams and Trigrams by Party. and trigrams per party for each issue. The histograms in Figure 1 show these distributions for the top 100 bigrams and trigrams. Based on these results, we use the top 20 bigrams (e.g., women’s healthcare and immigration reform) and trigrams (e.g. prevent gun violence) as input to PSL predicates BIGRAMIP (T, B) and TRIGRAMIP (T, TG). These rules represent that tweet T has bigram B or trigram TG from the respective issue I phrase lists of either party P. 4.3 Twitter Behavior Based Models In addition to language based features of tweets, we also exploit the behavioral and social features of Twitter including similarities between temporal activity and network relationships. Temporal Similarity: We construct a temporal histogram for each politician which captures their Twitter activity over time. When an event happens politicians are most likely to tweet about that event within hours of its occurrence. Similarly, most politicians tweet about the event most frequently the day of the event and this frequency decreases over time. From these temporal histograms, we observed that the frames used the day of an event were similar and gradually changed over time. For example, once the public is notified of a shooting, politicians respond with Frame 17 to offer sympathy to the victims and their families. Over the next days or weeks, both parties slowly transition to using additional frames, e.g. Democrats use Frame 7 to argue for gun control legislation. To capture this behavior we use the PSL predicate SAMETIME(T1, T2). This indicates that tweet T1 occurs around the same time as tweet 745 TYPES OF MODELS MODEL NUMBER BASIS OF MODEL EXAMPLE OF PSL RULES LANGUAGE BASED 1 Unigrams UNIGRAMF (T, U) !FRAME(T, F) 2 Bigrams UNIGRAMF (T, U) ^BIGRAMI P (T, B) !FRAME(T, F) 3 Trigrams UNIGRAMF (T, U) ^TRIGRAMI P (T, TG) !FRAME(T, F) BEHAVIOR BASED 4 Temporal Activity SAMETIME(T1, T2) ^FRAME(T1, F) !FRAME(T2, F) 5 Retweet Patterns RETWEETS(T1, T2) ^FRAME(T1, F) !FRAME(T2, F) 6 Following Network FOLLOWS(T1, T2) ^FRAME(T1, F) !FRAME(T2, F) Table 3: Examples of PSL Model Rules. Each model adds to the rules of the previous model. The full list of rule combinations for each model is available with our dataset. T2.5 This information is used in Model 4 via rules such as: SAMETIME(T1, T2) & FRAME(T1, F) !FRAME(T2, F), as shown in line 4 of Table 3. 
Network Similarity: Finally, we expect that politicians who share ideologies, and thus are likely to frame issues similarly, will retweet and/or follow each other on Twitter. Due to the compound nature of tweets, retweeting with additional comments can add more frames to the original tweet. Additionally, politicians on Twitter are more likely to follow members of their own party or similar non-political entities than those of the opposing party. To capture this network-based behavior we use two PSL predicates: RETWEETS(T1, T2) and FOLLOWS(T1, T2). These predicates indicate that the content of tweet T1 includes a retweet of tweet T2 and that the author of T1 follows the author of T2 on Twitter, respectively. The last two lines of Table 3 show examples of how network similarity is incorporated into PSL rules.

5 Experiments

Evaluation Metrics: Since each tweet can have more than one frame, our prediction task is a multilabel classification task. The precision of a multilabel model is the ratio of how many predicted labels are correct:

Precision = \frac{1}{T} \sum_{t=1}^{T} \frac{|Y_t \cap h(x_t)|}{|h(x_t)|}    (2)

The recall of this model is the ratio of how many of the actual labels were predicted:

Recall = \frac{1}{T} \sum_{t=1}^{T} \frac{|Y_t \cap h(x_t)|}{|Y_t|}    (3)

In both formulas, T is the number of tweets, Y_t is the set of true labels for tweet t, x_t is a tweet example, and h(x_t) are the predicted labels for that tweet. The F1 score is computed as the harmonic mean of the precision and recall. Additionally, in Tables 4, 5, and 6 the reported average is the micro-weighted average of the F1 scores over all frames.
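As a concrete reading of Equations (2) and (3), the sketch below computes the multilabel precision, recall, and F1; representing the gold and predicted frames of each tweet as Python sets is an assumption for illustration rather than the authors' implementation.

```python
def multilabel_prf(gold, predicted):
    """gold, predicted: parallel lists with one set of frame labels per tweet.
    Returns (precision, recall, F1) as defined in Equations (2) and (3)."""
    precisions, recalls = [], []
    for y_true, y_pred in zip(gold, predicted):
        overlap = len(y_true & y_pred)
        precisions.append(overlap / len(y_pred) if y_pred else 0.0)
        recalls.append(overlap / len(y_true) if y_true else 0.0)
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# For example, a tweet with gold frames {7, 17} and predictions {7, 12}
# contributes a per-tweet precision of 1/2 and a recall of 1/2 to the averages.
```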
Experimental Settings: We provide an analysis of our PSL models under both supervised and unsupervised settings. In the PSL supervised experiments, we used five-fold cross-validation with randomly chosen splits. Previous works typically use an SVM with bag-of-words features, which is not used in a multilabel manner, i.e., each frame is predicted individually. The results of this approach on our dataset are shown in column 2 of Table 4. In this scenario, the SVM tends to prefer the majority class, which results in many incorrect labels. Column 3 shows the results of using an SVM with bag-of-words features to perform multilabel classification. This approach decreases the F1 score for a majority of frames. Both SVMs also result in F1 scores of 0 for some frames, further lowering the overall performance. Finally, columns 4 and 5 show the results of using our worst and best PSL models, respectively. PSL Model 1, which uses our adapted unigram features instead of the bag-of-words features for multilabel classification, serves as our baseline to improve upon. Additionally, Model 6 of the supervised, collective network setting represents the best results we can achieve.

We also explore the results of our PSL models in an unsupervised setting because the highly dynamic nature of political discourse on Twitter makes it unrealistic to expect annotated data to generalize to future discussions. The only source of supervision comes from the initial unigram lists and party information as described in Section 4. The labeled tweets are used for evaluation only.

SETTING   SVM INDIV.   SVM MULTI.   PSL M1   PSL M6
SUP.      28.67        18.90        66.02    77.79
UNSUP.    —            —            37.14    58.66

Table 4: Baseline and Skyline Micro-weighted Average F1 Scores. SVM INDIV. is the SVM trained to predict one frame. SVM MULTI. is the multiclass SVM. PSL M1 is the adapted unigram PSL Model 1. PSL M6 is the collective network.

As seen in Table 4, we are able to improve the best unsupervised model to within an F1 score of 7.36 points of the unigram baseline of 66.02, and 19.13 points of the best supervised score of 77.79.

Analysis of Supervised Experiments: Table 5 shows the results of our supervised experiments. Here we can see that by adding Twitter behavior (beginning with Model 4), our behavior-based models achieve the best F1 scores across all frames. Model 4 achieves the highest results on two frames, suggesting retweeting and network follower information do not help improve the prediction score for these frames. Similarly, Model 5 achieves the highest prediction for 5 of the frames, suggesting network follower information cannot further improve the score for these frames. Overall, the Twitter behavior based models are able to outperform language based models alone, including the best performing language model (Model 3), which combines unigrams, bigrams, and trigrams together to collectively infer the correct frames.

Analysis of Unsupervised Experiments: In the unsupervised setting, Model 6, the combination of language and Twitter behavior features, achieves the best results on 16 of the 17 frames, as shown in Table 6. There are a few interesting aspects of the unsupervised setting which differ from the supervised setting. Six of the frame predictions do worse in Model 2, which is double that of the supervised version. This is likely due to the presence of overlapping bigrams across frames and issues, e.g., "women's healthcare" could appear in both Frames 4 and 8 and the issues of ACA and abortion. However, all six are able to improve with the addition of trigrams (Model 3), whereas only 1 of 3 frames improves in the supervised setting. This suggests that bigrams may not be as useful as trigrams in an unsupervised setting. Finally, in Model 5, which adds retweet behaviors, we notice that 5 of the frames decrease in F1 score and 11 of the frames have the same score as the previous model. These results suggest that retweet behaviors are not as useful as the follower network relationships in an unsupervised setting.

[Figure 2: Predicted Frames for Tweets from 2014 to 2016 by Party for ACA and Terrorism Issues. Four panels plot the number of tweets per frame in 2014, 2015, and 2016: (a) Democrat ACA Frames, (b) Democrat Terrorism Frames, (c) Republican ACA Frames, (d) Republican Terrorism Frames.]

6 Qualitative Analysis

To explore the usefulness of frame identification in political discourse analysis, we apply our best performing model (Model 6) on the unlabeled dataset to determine framing patterns over time, both by party and individual. Figure 2 shows the results of our frame analysis for both parties over time for two issues: ACA and terrorism.6 We compiled the predicted frames for tweets from 2014 to 2016 for each party.
Figure 3 presents the results of frame prediction for 2015 tweets of aisle-crossing individual politicians for these two issues. Party Frames: From Figure 2(a) we can see that Democrats mainly use Frames 1, 4, 8, 9, and 15 to discuss ACA, while Figure 2(c) shows that Republicans predominantly use Frames 1, 8, 9, 12, and 13. Though the parties use similar frames, they are used to express different agendas. For example, Democrats use Frame 8 to indicate the positive effect that the ACA has had in granting more Americans health care access. Republicans, however, use Frame 8 (and Frame 13) to indicate their party’s agenda to replace the ACA with access to different options for health care. Additionally, Democrats use the Fairness & Equality Frame (Frame 4) to convey that the ACA gives minority groups a better chance at accessing health care. 6Due to space, we omit the other 4 issues. These 2 were chosen because they are among the most frequently discussed issues in our dataset. 747 Frame Number Frame RESULTS OF SUPERVISED PSL MODEL FRAME PREDICTIONS MODEL 1 MODEL 2 MODEL 3 MODEL 4 MODEL 5 MODEL 6 1 ECONOMIC 85.19 85.19 86.73 87.72 87.72 89.88 2 CAPACITY & RESOURCES 55.38 61.54 76.71 77.11 77.11 79.55 3 MORALITY 73.39 80.52 86.95 87.5 87.43 87.43 4 FAIRNESS 63.56 67.83 65.19 69.91 79.53 82.35 5 LEGALITY 80.41 80.78 80.79 83.33 81.79 82.16 6 CRIME 54.55 54.55 66.67 76.92 76.92 76.92 7 SECURITY 84.40 82.14 84.10 86.67 86.67 88.48 8 HEALTH 73.50 75.76 75.59 77.46 79.71 79.71 9 QUALITY OF LIFE 69.39 68.00 69.39 72.34 72.34 82.93 10 CULTURAL 75.86 78.57 81.25 81.25 81.25 85.71 11 PUBLIC SENTIMENT 12.25 15.25 24.62 24.24 26.24 29.41 12 POLITICAL 54.21 63.31 74.33 74.42 74.52 74.52 13 POLICY 55.75 58.87 60.25 61.54 64.06 65.06 14 EXTERNAL REGULATION 60.71 59.15 64.71 74.35 74.35 85.71 15 FACTUAL 66.56 68.00 71.43 81.82 80.82 82.85 16 (SELF) PROMOTION 85.71 86.46 86.58 87.34 87.33 91.76 17 PERSONAL 71.79 71.71 74.73 75.00 77.55 77.55 WEIGHTED AVERAGE 66.02 68.78 72.49 74.40 75.71 77.79 Table 5: F1 Scores of Supervised PSL Models. The highest prediction per frame is marked in bold. Frame Number Frame RESULTS OF UNSUPERVISED PSL MODEL FRAME PREDICTIONS MODEL 1 MODEL 2 MODEL 3 MODEL 4 MODEL 5 MODEL 6 1 ECONOMIC 31.82 31.52 69.57 72.22 72.22 73.23 2 CAPACITY & RESOURCES 23.38 28.51 40.00 41.18 41.18 41.18 3 MORALITY 28.63 29.41 47.67 53.98 43.06 53.99 4 FAIRNESS 33.49 47.19 59.15 63.50 63.50 64.74 5 LEGALITY 44.58 46.93 58.02 60.64 60.63 64.54 6 CRIME 7.89 7.62 73.33 75.00 75.00 76.92 7 SECURITY 42.50 40.24 51.83 62.09 61.68 64.09 8 HEALTH 48.36 48.79 79.43 86.49 86.49 86.67 9 QUALITY OF LIFE 17.82 21.99 48.89 52.63 52.63 54.35 10 CULTURAL 15.38 15.67 51.22 52.63 52.63 55.56 11 PUBLIC SENTIMENT 15.22 15.72 50.79 53.97 41.03 54.69 12 POLITICAL 49.06 48.20 50.29 46.99 46.99 47.23 13 POLICY 39.88 39.39 37.02 42.77 42.77 43.79 14 EXTERNAL REGULATION 12.66 14.22 44.44 66.67 66.67 71.43 15 FACTUAL 24.64 19.21 70.95 70.37 70.41 78.95 16 (SELF) PROMOTION 40.11 46.41 48.16 50.96 50.96 52.89 17 PERSONAL 45.36 46.15 59.66 62.99 62.13 71.20 WEIGHTED AVERAGE 37.14 38.79 53.13 56.49 55.54 58.66 Table 6: F1 Scores of Unsupervised PSL Models. The highest prediction per frame is marked in bold. They also use Frame 15 to express statistics about enrollment of Americans under the ACA. Finally, Republicans use Frames 12 and 13 to bring attention to their own party’s actions to “repeal and replace” the ACA with different policies. 
Figures 2(b) and 2(d) show the party-based framing patterns over time for terrorism related tweets. For this issue both parties use similar frames: 3, 7, 10, 14, 16, and 17, but to express different views. For example, Democrats use Frame 3 to indicate a moral responsibility to fight ISIS. Republicans use Frame 3 to frame terrorists or their attacks as a result of “radical Islam”. An interesting pattern to note is seen in Frames 10 and 14 for both parties. In 2015 there is a large increase in the usage of this frame. This seems to indicate that parties possibly adopt new frames simultaneously or in response to the opposing party, perhaps in an effort to be in control of the way the message is delivered through that frame. Individual Frames: In addition to entire party analysis, we were interested in seeing if frames could shed light on the behavior of aisle-crossing politicians. These are politicians who do not vote the same as the majority vote of their party (i.e., they vote the same as the opposing party). Identifying such politicians can be useful in governments which are heavily split by party, i.e., governments such as the recent U.S. Congress (2015 to 2017), where politicians tend to vote the same 748 0 10 20 30 40 50 60 70 80 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Number of Tweets Frames ACA Vote Aisle-Crossing Republicans Dold Buck Meadows Walker Salmon Poliquin Hanna Jones (a) Aisle-Crossing Republicans on ACA Votes. 0 5 10 15 20 25 30 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Number of Tweets Frames Terrorism Vote Aisle-Crossing Democrats Clyburn Carson Lee Watson Coleman Cleaver Moore McDermott Lewis Fudge Kaptur Richmond (b) Aisle-Crossing Democrats on Terrorism Votes. Figure 3: Predicted Frames for Tweets of AisleCrossing Politicians in 2015. as the rest of their party members. For this analysis, we collected five 2015 votes from the House of Representatives on both issues and compiled a list of the politicians who voted opposite to their party. The most important descriptor we noticed was that all aisle-crossing politicians tweet less frequently on the issue than their fellow party members. This is true for both parties. This behavior could indicate lack of desire to draw attention to one’s stance on the particular issue. Figure 3(a) shows the framing patterns of aislecrossing Republicans on ACA votes from 2015. Recall from Figure 2 that Democrats mostly use Frames 1, 4, 8, 9, and 15, while Republicans mainly use Frames 1, 8, and 9. In this example, these Republicans are considered aislecrossing votes because they have voted the same as Democrats on this issue. The most interesting pattern to note here is that these Republicans use the same framing patterns as the Republicans (Frames 1, 8, and 9), but they also use the frames that are unique to Democrats: Frames 4 and 15. These latter two frames appear significantly less in the Republican tweets of our entire dataset as well. These results suggest that to predict aisle-crossing Republicans it would be useful to check for usage of typically Democrat-associated frames, especially if those frames are infrequently used by Republicans. Figure 3(b) shows the predicted frames for aisle-crossing Democrats on terrorism-related votes. We see here that there are very few tweets from these Democrats on this issue and that overall they use the same framing patterns as seen previously: Frames 3, 7, 10, 14, 16, and 17. However, given the small scale of these tweets, we can also consider Frames 12 and 13 to show peaks for this example. 
This suggests that for aisle-crossing Democrats the use of additional frames not often used by their party for discussing an issue might indicate potentially different voting behaviors. 7 Conclusion In this paper we present the task of collective classification of Twitter data for framing prediction. We show that by incorporating Twitter behaviors such as similar activity times and similar networks, we can increase F1 score prediction. We provide an analysis of our approach in both supervised and unsupervised settings, as well as a real world analysis of framing patterns over time. Finally, our global PSL models can be applied to other domains, such as politics in other countries, simply by changing the initial unigram keywords to reflect the politics of those countries. Acknowledgments We thank the anonymous reviewers for their thoughtful comments and suggestions. References Rob Abbott, Marilyn Walker, Pranav Anand, Jean E. Fox Tree, Robeson Bowmani, and Joseph King. 2011. How can you say such things?!?: Recognizing disagreement in informal political argument. In Proc. of the Workshop on Language in Social Media. Amjad Abu-Jbara, Ben King, Mona Diab, and Dragomir Radev. 2013. Identifying opinion subgroups in arabic online discussions. In Proc. of ACL. Stephen H Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2015. Hinge-loss markov random fields and probabilistic soft logic. arXiv preprint arXiv:1505.04406 . Stephen H. Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss Markov random fields: 749 Convex inference for structured prediction. In Proc. of UAI. Akshat Bakliwal, Jennifer Foster, Jennifer van der Puil, Ron O’Brien, Lamia Tounsi, and Mark Hughes. 2013. Sentiment analysis of political tweets: Towards an accurate classifier. In Proc. of ACL. David Bamman and Noah A Smith. 2015. Open extraction of fine-grained political statements. In Proc. of EMNLP. Eric Baumer, Elisha Elovic, Ying Qin, Francesca Polletta, and Geri Gay. 2015. Testing and comparing computational approaches for identifying the language of framing in political news. In Proc. of NAACL. Adam Bermingham and Alan F Smeaton. 2011. On using twitter to monitor political sentiment and predict election results . Amber Boydstun, Dallas Card, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2014. Tracking the development of media frames within and across policy issues. Lauren M. Burch, Evan L. Frederick, and Ann Pegoraro. 2015. Kissing in the carnage: An examination of framing on twitter during the vancouver riots. Journal of Broadcasting & Electronic Media 59(3):399–415. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proc. of ACL. Eunsol Choi, Chenhao Tan, Lillian Lee, Cristian Danescu-Niculescu-Mizil, and Jennifer Spindel. 2012. Hedge detection as a lens on framing in the gmo debates: A position paper. In Proc. of ACL Workshops. Dennis Chong and James N Druckman. 2007. Framing theory. Annu. Rev. Polit. Sci. 10:103–126. Michael D Conover, Bruno Gonc¸alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of twitter users. In Proc. of PASSAT. Sarah Djemili, Julien Longhi, Claudia Marinica, Dimitris Kotzinos, and Georges-Elia Sarfati. 2014. What does twitter have to say about ideology? In NLP 4 CMC. Javid Ebrahimi, Dejing Dou, and Daniel Lowd. 2016. Weakly supervised tweet stance classification by relational bootstrapping. In Proc. of EMNLP. 
Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proc. of NAACL. Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of communication 43(4):51–58. Dean Fulgoni, Jordan Carpenter, Lyle Ungar, and Daniel Preotiuc-Pietro. 2016. An empirical exploration of moral foundations theory in partisan news sources. In Proc. of LREC. Sean Gerrish and David M Blei. 2012. How they vote: Issue-adjusted models of legislative behavior. In Advances in Neural Information Processing Systems. pages 2753–2761. Stephan Greene and Philip Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. In Proc. of NAACL. Jacob Groshek and Ahmed Al-Rawi. 2013. Public sentiment and critical framing in social media content during the 2012 u.s. presidential campaign. Social Science Computer Review 31(5):563–576. Summer Harlow and Thomas Johnson. 2011. The arab spring— overthrowing the protest paradigm? how the new york times, global voices and twitter covered the egyptian revolution. International Journal of Communication 5(0). Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? identifying and classifying reasons in ideological debates. In Proc. of EMNLP. Bert Huang, Stephen H. Bach, Eric Norris, Jay Pujara, and Lise Getoor. 2012. Social group modeling with probabilistic soft logic. In NIPS Workshops. Iyyer, Enns, Boyd-Graber, and Resnik. 2014. Political ideology detection using recursive neural networks. In Proc. of ACL. S. Mo Jang and P. Sol Hart. 2015. Polarized frames on ”climate change” and ”global warming” across countries and states: Evidence from twitter big data. Global Environmental Change 32:11–17. Kristen Johnson and Dan Goldwasser. 2016. All i know about politics is what i read in twitter: Weakly supervised models for extracting politicians’ stances from twitter. In Proc. of COLING. Jiwei Li, Alan Ritter, Claire Cardie, and Eduard H Hovy. 2014a. Major life event extraction from twitter based on congratulations/condolences speech acts. In Proc. of EMNLP. Jiwei Li, Alan Ritter, and Eduard H Hovy. 2014b. Weakly supervised user profile extraction from twitter. In Proc. of ACL. Micol Marchetti-Bowick and Nathanael Chambers. 2012. Learning for microblogs with distant supervision: Political forecasting with twitter. In Proc. of EACL. Sharon Meraz and Zizi Papacharissi. 2013. Networked gatekeeping and networked framing on #egypt. The International Journal of Press/Politics 18(2):138– 166. 750 Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea party in the house: A hierarchical ideal point topic model and its application to republican legislators in the 112th congress. In Proc. of ACL. Brendan O’Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proc. of ICWSM. Ferran Pla and Llu´ıs F Hurtado. 2014. Political tendency identification in twitter using sentiment analysis techniques. In Proc. of COLING. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In Proc. of ACL. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of twitter conversations. In Proc. of NAACL. Sim, Acree, Gross, and Smith. 2013. Measuring ideological proportions in political speeches. In Proc. of EMNLP. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proc. of ACL. 
Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proc. of NAACL Workshops. Dhanya Sridhar, James Foulds, Bert Huang, Lise Getoor, and Marilyn Walker. 2015. Joint models of disagreement and stance in online debate. In Proc. of ACL. Chenhao Tan, Lillian Lee, and Bo Pang. 2014. The effect of wording on message propagation: Topicand author-controlled natural experiments on twitter. In Proc. of ACL. Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. In Proc. of ACL. Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. In Proc. of ICWSM. Chris J. Vargo, Lei Guo, Maxwell McCombs, and Donald L. Shaw. 2014. Network issue agendas on twitter during the 2012 u.s. presidential election. Journal of Communication 64(2):296–316. Svitlana Volkova, Yoram Bachrach, Michael Armstrong, and Vijay Sharma. 2015. Inferring latent user properties from texts published in social media. In Proc. of AAAI. Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proc. of ACL. Marilyn A. Walker, Pranav Anand, Robert Abbott, and Ricky Grant. 2012. Stance classification using dialogic properties of persuasion. In Proc. of NAACL. Robert West, Hristo S Paskov, Jure Leskovec, and Christopher Potts. 2014. Exploiting social network structure for person-to-person sentiment analysis. TACL . Janyce Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational linguistics . Tae Yano, Dani Yogatama, and Noah A Smith. 2013. A penny for your tweets: Campaign contributions and capitol hill microblogs. In Proc. of ICWSM. A Supplementary Material In this section we provide additional information about our congressional tweets dataset, as well as the lists of keywords and phrases used to filter tweets by issue and the unigrams used to extract information used for the Unigram and MaxSim PSL predicates. It is important to note that during preprocessing capitalization, stop words, URLs, and punctuation have been removed from tweets in our dataset. Additional word lists along with our PSL scripts and dataset are available at: http://purduenlp.cs.purdue.edu/ projects/twitterframing. Figure 4: Coverage of Frames by Party. Dataset Statistics: Figure 4 shows the coverage of the labeled frames by party. From this, general patterns can be observed. For example, Republicans use Frames 12 and 17 more frequently than Democrats, while Democrats tend to use Frames 4, 9, 10, and 11. Table 7 shows the count of each type of frame that appears in each issue in our labeled dataset. 751 ISSUE FRAMES 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Abortion 4 7 23 55 40 0 2 32 10 0 4 46 20 0 1 13 8 ACA 65 9 6 28 24 0 3 128 21 3 18 116 174 2 21 100 15 Guns 2 2 37 16 30 21 93 8 36 14 49 166 65 0 5 55 147 Immigration 16 7 6 6 42 3 15 0 29 19 7 81 52 1 1 32 2 LGBTQ 0 0 9 99 23 2 2 3 10 17 7 39 14 1 2 11 48 Terrorism 6 4 46 3 11 10 115 1 6 13 14 69 68 35 6 99 57 Table 7: Count of Each Type of Frame Per Issue in Labeled Dataset. 
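The per-party and per-issue tabulations summarized in Figure 4 and Table 7 can be reproduced from any labeled tweet table with a few lines of aggregation. The pandas sketch below is illustrative only: the file name and the column names ("party", "issue", "frame") are assumptions, not artifacts released with the paper.

```python
# Hypothetical sketch: tabulating frame coverage by party (cf. Figure 4) and
# frame counts per issue (cf. Table 7) from a labeled tweet table.
import pandas as pd

labeled = pd.read_csv("labeled_tweets.csv")  # assumed layout: one row per labeled tweet

# Share of each party's labels that fall into each frame.
counts = labeled.groupby(["party", "frame"]).size()
by_party = (counts / counts.groupby(level="party").transform("sum")).unstack(fill_value=0.0)

# Raw count of each type of frame per issue.
by_issue = labeled.groupby(["issue", "frame"]).size().unstack(fill_value=0)

print(by_party.round(3))
print(by_issue)
```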
ISSUE AND KEYWORDS OR PHRASES ABORTION: abortion, pro-life, pro-choice, Planned Parenthood, StandWithPP, Hobby Lobby, birth control, women’s choice, women’s rights, women’s health ACA: patient protection, affordable care act, ACA, obamacare, health care, healthcare, Burwell, Medicare, Medicaid, repeal and replace GUNS: Charleston, gun, shooting, Emanuel, Second Amendment, Oregon, San Bernadino, gun violence, gun control, 2A, NRA, Orlando, Pulse IMMIGRATION: immigration, immigrants, illegal immigrants, border, amnesty, wall, Dreamers, Dream Act LGBTQ: equality, marriage, gay, transgender, marriage equality, same sex, gay marriage, religious freedom, RFRA, bathroom bill TERRORISM: terrorism, terrorists, terror network, ISIS, ISIL, Al Qaeda, Boko Haram, extremist Table 8: Keywords or Phrases Used to Filter Tweets for Issue. FRAME NUMBER, FRAME, AND ADAPTED UNIGRAMS 1. ECONOMIC: premium(s), small, business(es), tax(es), economy, economic, cost(s), employment, market, spending, billion(s), million(s), company, companies, funding, regulation, benefit(s), health 2. CAPACITY & RESOURCES: resource(s), housing, infrastructure, IRS, national, provide(s), providing, fund(s), funding, natural, enforcement 3. MORALITY & ETHICS: moral, religion(s), religious, honor(able), responsible, responsibility, illegal, protect, god(s), sanctity, Islam, Muslim, Christian, radical, violence, victim(s), church 4. FAIRNESS & EQUALITY: fair(ness), equal(ity), inequality, law(s), right(s), race, gender, class, access, poor, civil, justice, social, women(s), LGBT, LGBTQ, discrimination, decision(s) 5. LEGALITY, CONSTITUTIONALITY, & JURISDICTION: right(s), law(s), executive, ruling, constitution(al), amnesty, decision(s), reproductive, legal, legality, court, SCOTUS, immigration, amendment(s), judge, authority, precedent, legislation 6. CRIME & PUNISHMENT: crime(s), criminal(s), gun(s), violate(s), enforce(s), enforced, enforcement, civil, tribunals, justice, victim(s), civilian(s), kill, murder, hate, genocide, consequences 7. SECURITY & DEFENSE: security, secure, defense, defend, threat(s), terror, terrorism, terrorist(s), gun(s), attack(s), wall, border, safe, safety, violent, violence, ISIS, ISIL, suspect(s), domestic, prevent, protect 8. HEALTH & SAFETY: health(y), care, healthcare, obamacare, access, disease(s), mental, physical, affordable, coverage, quality, (un)insured, disaster, relief, unsafe, cancer, abortion 9. QUALITY OF LIFE: quality, happy, social, community, life, benefit(s), adopt, fear, deportation, living, job(s), activities, family, families, health, support 10. CULTURAL IDENTITY: identity, social, value(s), Reagan, Lincoln, conservative(s), liberal(s), nation, America, American(s), community, communities, country, dreamers, immigrants, refugees, history, historical 11. PUBLIC SENTIMENT: public, sentiment, opinion, poll(s), turning, survey, support, American(s), reform, action, want, need, vote 12. POLITICAL FACTORS & IMPLICATIONS: politic(s), political, stance, view, (bi)partisan, filibuster, lobby, Republican(s), Democrat(s), House, Senate, Congress, committee, party, POTUS, SCOTUS, administration, GOP 13. POLICY DESCRIPTION, PRESCRIPTION, & EVALUATION: policy, fix(ing), work(s), working, propose(d), proposing, proposal, solution, solve, outcome(s), bill, law, amendment, plan, support, repeal, reform 14. 
EXTERNAL REGULATION AND REPUTATION: regulation, US, ISIS, ISIL, relations, international, national, trade, foreign, state, border, visa, ally, allies, united, refugees, leadership, issues, Iraq, Iran, Syria, Russia, Europe, Mexico, Canada 15. FACTUAL: health, insurance, affordable, deadline, enroll, sign, signed, program, coverage 16. (SELF) PROMOTION: statement, watch, discuss, hearing, today, tonight, live, read, floor, talk, tune, opinion, TV, oped 17. PERSONAL SYMPATHY & SUPPORT: victims, thoughts, prayer(s), pray(ing), family, stand, support, tragedy, senseless, heartbroken, people, condolences, love, remember, forgive(ness), saddened Table 9: Frame and Corresponding Unigrams Used for Initial Supervision. Word Lists: Table 8 lists the keywords or phrases used to filter the entire dataset to only tweets related to the six issues studied in this paper. Table 9 lists the unigrams that were designed based on the descriptions for Frames 1 through 14 provided in the Policy Frames Codebook (Boydstun et al., 2014). These unigrams provide the initial supervision for our models as described in Section 4. 752
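To make the filtering and seeding procedure concrete, here is a minimal Python sketch of how tweets could be routed to issues with the Table 8 keywords and matched against the Table 9 unigrams. The matching scheme (lower-cased substring match for phrases, token overlap for unigrams) and the abridged dictionaries are assumptions; the paper only specifies that capitalization, stop words, URLs, and punctuation are removed during preprocessing.

```python
# Minimal sketch of keyword filtering (Table 8) and unigram seeding (Table 9).
# Matching details are assumptions, not the authors' exact procedure.
import re

ISSUE_KEYWORDS = {
    "guns": ["gun", "second amendment", "nra", "gun violence", "gun control"],
    "aca": ["affordable care act", "aca", "obamacare", "health care", "medicaid"],
    # ... remaining issues follow Table 8
}

FRAME_UNIGRAMS = {
    1: ["premium", "tax", "economy", "cost", "spending"],      # Economic (abridged)
    7: ["security", "terror", "border", "safety", "attack"],   # Security & Defense (abridged)
    # ... remaining frames follow Table 9
}

def preprocess(tweet: str) -> str:
    tweet = re.sub(r"https?://\S+", " ", tweet.lower())   # drop URLs, lowercase
    return re.sub(r"[^a-z#@' ]+", " ", tweet)              # drop punctuation/digits

def issues_for(tweet: str):
    text = preprocess(tweet)
    return [issue for issue, kws in ISSUE_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def candidate_frames(tweet: str):
    tokens = set(preprocess(tweet).split())
    return [f for f, unigrams in FRAME_UNIGRAMS.items()
            if tokens & set(unigrams)]
```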
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 69–76 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1007 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 69–76 Vancouver, Canada, July 30 - August 4, 2017. c⃝2017 Association for Computational Linguistics https://doi.org/10.18653/v1/P17-1007 Skip-Gram – Zipf + Uniform = Vector Additivity Alex Gittens Dept. of Computer Science Rensselaer Polytechnic Institute [email protected] Dimitris Achlioptas Dept. of Computer Science UC Santa Cruz [email protected] Michael W. Mahoney ICSI and Dept. of Statistics UC Berkeley [email protected] Abstract In recent years word-embedding models have gained great popularity due to their remarkable performance on several tasks, including word analogy questions and caption generation. An unexpected “sideeffect” of such models is that their vectors often exhibit compositionality, i.e., adding two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., “man” + “royal” = “king”. This work provides a theoretical justification for the presence of additive compositionality in word vectors learned using the Skip-Gram model. In particular, it shows that additive compositionality holds in an even stricter sense (small distance rather than small angle) under certain assumptions on the process generating the corpus. As a corollary, it explains the success of vector calculus in solving word analogies. When these assumptions do not hold, this work describes the correct non-linear composition operator. Finally, this work establishes a connection between the Skip-Gram model and the Sufficient Dimensionality Reduction (SDR) framework of Globerson and Tishby: the parameters of SDR models can be obtained from those of Skip-Gram models simply by adding information on symbol frequencies. This shows that SkipGram embeddings are optimal in the sense of Globerson and Tishby and, further, implies that the heuristics commonly used to approximately fit Skip-Gram models can be used to fit SDR models. 1 Introduction The strategy of representing words as vectors has a long history in computational linguistics and machine learning. The general idea is to find a map from words to vectors such that wordsimilarity and vector-similarity are in correspondence. Whilst vector-similarity can be readily quantified in terms of distances and angles, quantifying word-similarity is a more ambiguous task. A key insight in that regard is to posit that the meaning of a word is captured by “the company it keeps” (Firth, 1957) and, therefore, that two words that keep company with similar words are likely to be similar themselves. In the simplest case, one seeks vectors whose inner products approximate the co-occurrence frequencies. In more sophisticated methods cooccurrences are reweighed to suppress the effect of more frequent words (Rohde et al., 2006) and/or to emphasize pairs of words whose co-occurrence frequency maximally deviates from the independence assumption (Church and Hanks, 1990). An alternative to seeking word-embeddings that reflect co-occurrence statistics is to extract the vectorial representation of words from non-linear statistical language models, specifically neural networks. 
(Bengio et al., 2003) already proposed (i) associating with each vocabulary word a feature vector, (ii) expressing the probability function of word sequences in terms of the feature vectors of the words in the sequence, and (iii) learning simultaneously the vectors and the parameters of the probability function. This approach came into prominence recently through the works of Mikolov et al. (see below), whose main departure from (Bengio et al., 2003) was to follow the suggestion of (Mnih and Hinton, 2007) and trade away the expressive capacity of general neural-network models for the scalability (to very large corpora) afforded by (the more restricted class of) log-linear models.

An unexpected side effect of deriving word-embeddings via neural networks is that the word-vectors produced appear to enjoy (approximate) additive compositionality: adding two word-vectors often results in a vector whose nearest word-vector belongs to the word capturing the composition of the added words, e.g., "man" + "royal" = "king" (Mikolov et al., 2013c). This unexpected property allows one to use these vectors to answer word-analogy questions algebraically, e.g., answering the question "Man is to king as woman is to ___" by returning the word whose word-vector is nearest to the vector v(king) - v(man) + v(woman).

In this work we focus on explaining the source of this phenomenon for the most prominent such model, namely the Skip-Gram model introduced in (Mikolov et al., 2013a). The Skip-Gram model learns vector representations of words based on their patterns of co-occurrence in the training corpus as follows: it assigns to each word c in the vocabulary V a "context" and a "target" vector, respectively u_c and v_c, which are to be used in order to predict the words that appear around each occurrence of c within a window of ∆ tokens. Specifically, the log probability of any target word w to occur at any position within distance ∆ of a context word c is taken to be proportional to the inner product between u_c and v_w, i.e., letting n = |V|,

p(w \mid c) = \frac{e^{u_c^\top v_w}}{\sum_{i=1}^{n} e^{u_c^\top v_i}} .   (1)

Further, Skip-Gram assumes that the conditional probability of each possible set of words in a window around a context word c factorizes as the product of the respective conditional probabilities:

p(w_{-\Delta}, \ldots, w_{\Delta} \mid c) = \prod_{\substack{\delta = -\Delta \\ \delta \neq 0}}^{\Delta} p(w_\delta \mid c) .   (2)

(Mikolov et al., 2013a) proposed learning the Skip-Gram parameters on a training corpus by using maximum likelihood estimation under (1) and (2). Thus, if w_i denotes the i-th word in the training corpus and T the length of the corpus, we seek the word vectors that maximize

\frac{1}{T} \sum_{i=1}^{T} \sum_{\substack{\delta = -\Delta \\ \delta \neq 0}}^{\Delta} \log p(w_{i+\delta} \mid w_i) .   (3)

As mentioned, the normalized context vectors obtained from maximizing (3) under (1) and (2) exhibit additive compositionality. For example, the cosine distance between the sum of the context vectors of the words "Vietnam" and "capital" and the context vector of the word "Hanoi" is small. While there has been much interest in using algebraic operations on word vectors to carry out semantic operations like composition, and mathematically-flavored explanations have been offered (e.g., in the recent work (Paperno and Baroni, 2016)), the only published work which attempts a rigorous theoretical understanding of this phenomenon is (Arora et al., 2016).
This work guarantees that word vectors can be recovered by factorizing the so-called PMI matrix, and that algebraic operations on these word vectors can be used to solve analogies, under certain conditions on the process that generated the training corpus. Specifically, the word vectors must be known a priori, before their recovery, and to have been generated by randomly scaling uniformly sampled vectors from the unit sphere1. Further, the ith word in the corpus must have been selected with probability proportional to euT wci, where the “discourse” vector ci governs the topic of the corpus at the ith word. Finally, the discourse vector is assumed to evolve according to a random walk on the unit sphere that has a uniform stationary distribution. By way of contrast, our results assume nothing a priori about the properties of the word vectors. In fact, the connection we establish between the Skip-Gram and the Sufficient Dimensionality Reduction model of (Globerson and Tishby, 2003) shows that the word vectors learned by Skip-Gram are information-theoretically optimal. Further, the context word c in the Skip-Gram model essentially serves the role that the discourse vector does in the PMI model of (Arora et al., 2016): the words neighboring c are selected with probability proportional to euT c vw. We find the exact non-linear composition operator when no assumptions are made on the context word. When an analogous assumption to that of (Arora et al., 2016) is made, that the 1More generally, it suffices that the word vectors have certain properties consistent with this sampling process. 70 context words are uniformly distributed, we prove that the composition operator reduces to vector addition. While our primary motivation has been to provide a better theoretical understanding of word compositionality in the popular Skip-Gram model, our connection with the SDR method illuminates a much more general point about the practical applicability of the Skip-Gram model. In particular, it addresses the question of whether, for a given corpus, fitting a Skip-Gram model will give good embeddings. Even if we are making reasonable linguistic assumptions about how to model words and the interdependencies of words in a corpus, it’s not clear that these have to hold universally on all corpuses to which we apply Skip-Gram. However, the fact that when we fit a Skip-Gram model we are fitting an SDR model (up to frequency information), and the fact that SDR models are information-theoretically optimal in a certain sense, argues that regardless of whether the Skip-Gram assumptions hold, Skip-Gram always gives us optimal features in the following sense: the learned context embeddings and target embeddings preserve the maximal amount of mutual information between any pair of random variables X and Y consistent with the observed co-occurence matrix, where Y is the target word and X is the predictor word (in a min-max sense, since there are many ways of coupling X and Y , each of which may have different amounts of mutual information). Importantly, this statement requires no assumptions on the distribution P(X, Y ). 2 Compositionality of Skip-Gram In this section, we first give a mathematical formulation of the intuitive notion of compositionality of words. We then prove that the composition operator for the Skip-Gram model in full generality is a non-linear function of the vectors of the words being composed. Under a single simplifying assumption, the operator linearizes and reduces to the addition of the word vectors. 
Finally, we explain how linear compositionality allows for solving word analogies with vector algebra. A natural way of capturing the compositionality of words is to say that the set of context words c1, . . . , cm has the same meaning as the single word c if for every other word w, p(w|c1, . . . , cm) = p(w|c) . Although this is an intuitively satisfying definition, we never expect it to hold exactly; instead, we replace exact equality with the minimization of KL-divergence. That is, we state that the best candidate for having the same meaning as the set of context words C is the word arg min c∈V DKL(p(·|C) | p(·|c)) . (4) We refer to any vector that minimizes (4) as a paraphrase of the set of words C. There are two natural concerns with (4). The first is that, in general, it is not clear how to define p(·|C). The second is that KL-divergence minimization is a hard problem, as it involves optimization over many high dimensional probability distributions. Our main result shows that both of these problems go away for any language model that satisfies the following two assumptions: A1. For every word c, there exists Zc such that for every word w, p(w|c) = 1 Zc exp(uT c vw) . (5) A2. For every set of words C = {c1, c2, . . . , cm}, there exists ZC such that for every word w, p(w|C) = p(w)1−m ZC m Y i=1 p(w|ci) . (6) Clearly, the Skip-Gram model satisfies A1 by definition. We prove that it also satisfies A2 when m ≤∆(Lemma 1). Next, we state a theorem that holds for any model satisfying assumptions A1 and A2, including the Skip-Gram model when m ≤∆. Theorem 1. In every word model that satisfies A1 and A2, for every set of words C = {c1, . . . , cm}, any paraphase c of C satisfies X w∈V p(w|c)vw = X w∈V p(w|C)vw . (7) Theorem 1 characterizes the composition operator for any language model which satisfies our two assumptions; in general, this operator is not addition. Instead, a paraphrase c is a vector such that the average word vector under p(·|c) matches that under p(·|C). When the expectations in (7) can be computed, the composition operator can be implemented by solving a non-linear system of equations to find a vector u for which the left-hand side of (7) equals the right-hand side. 71 Our next result proves that although the composition operator is nontrivial in the general case, to recover vector addition as the composition operator, it suffices to assume that the word frequency is uniform. Theorem 2. In every word model that satisfies A1, A2, and where p(w) = 1/|V | for every w ∈V , the paraphrase of C = {c1, . . . , cm} is u1 + . . . + um . As word frequencies are typically much closer to a Zipf distribution (Piantadosi, 2014), the uniformity assumption of Theorem 2 is not realistic. That said, we feel it is important to point out that, as reported in (Mikolov et al., 2013b), additivity captures compositionality more accurately when the training set is manipulated so that the prior distribution of the words is made closer to uniform. Using composition to solve analogies. It has been observed that word vectors trained using nonlinear models like Skip-Gram tend to encode semantic relationships between words as linear relationships between the word vectors (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014). In particular, analogies of the form “man:woman::king:?” can often be solved by taking ? to be the word in the vocabulary whose context vector has the smallest angle with uwoman + (uking −uman). 
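The vector-algebra recipe just described is easy to state in code. The sketch below is illustrative, not the authors' implementation: it uses a toy context-vector matrix and answers "m : w :: k : ?" by returning the vocabulary word whose context vector has the largest cosine similarity with u_k + (u_w − u_m), excluding the three query words.

```python
# Sketch of analogy solving by vector addition and smallest angle (toy inputs).
import numpy as np

def solve_analogy(U, words, m, w, k):
    idx = {word: i for i, word in enumerate(words)}
    query = U[idx[k]] + (U[idx[w]] - U[idx[m]])
    # cosine similarity of the query against every context vector
    sims = (U @ query) / (np.linalg.norm(U, axis=1) * np.linalg.norm(query) + 1e-12)
    for i in np.argsort(-sims):            # best match, skipping the three query words
        if words[i] not in (m, w, k):
            return words[i]

# Toy usage with made-up 2-d vectors (illustrative only)
words = ["man", "woman", "king", "queen"]
U = np.array([[1.0, 0.0], [1.0, 1.0], [3.0, 0.2], [3.0, 1.2]])
print(solve_analogy(U, words, "man", "woman", "king"))   # -> "queen"
```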
Theorems 1 and 2 offer insight into the solution such analogy questions. We first consider solving an analogy of the form “m:w::k:?”” in the case where the composition operator is nonlinear. The fact that m and w share a relationship means m is a paraphrase of the set of words {w, R}, where R is a set of words encoding the relationship between m and w. Similarly, the fact that k and ? share the same relationship means k is a paraphrase of the set of words {?, R}. By Theorem 1, we have that R and ? must satisfy X ℓ∈V p(ℓ|m)vℓ= X ℓ∈V p(ℓ|w, R)vℓ and X ℓ∈V p(ℓ|k)vℓ= X ℓ∈V p(ℓ|?, R)vℓ. We see that solving analogies when the composition operator is nonlinear requires the solution of two highly nonlinear systems of equations. In sharp contrast, when the composition operator is linear, the solution of analogies delightfully reduces to elementary vector algebra. To see this, we again begin with the assertion that the fact that m and w share a relationship means m is a paraphrase of the set of words {w, R}; Similarly, k is a paraphrase of {?, R}. By Theorem 2, um = uw + ur and uk = u? + ur, which gives the expected relationship u? = uk + (uw −um). Note that because this expression for u? is in terms of k, w, and m, there is actually no need to assume that R is a set of actual words in V . 2.1 Proofs Proof of Theorem 1. Note that p(w|C) equals p(w)1−m ZC m Y i=1 p(w|ci) = p(w)1−m ZC exp m X i=1 uT civw − m X i=1 log Zci ! = 1 Z p(w)1−m exp(uT Cvw) , where Z = ZC Qm i=1 Zi, and uC = Pm i=1 ui. Minimizing the KL-divergence DKL(p(·|c1, . . . , cm)∥p(·|c)) as a function of c is equivalent to maximizing the negative cross-entropy as a function of uc, i.e., as maximizing Q(uc) = Z X w exp(uT Cvw) p(w)m−1 (uT c vw −log Zc) . Since Q is concave, the maximizers occur where its gradient vanishes. As ∇ucQ equals Z X w exp(uT Cvw) p(w)m−1  vw − Pn ℓ=1 exp(uT c vℓ)vℓ Pn k=1 exp(uTc vk)  = Pn ℓ=1 exp(uT c vℓ)vℓ Pn k=1 exp(uTc vk) −Z X w exp(uT Cvw)vw p(w)m−1 = X w∈V p(w|c)vw − X w∈V p(w|c1, . . . , cm)vw , we see that (7) follows. Proof of Theorem 2. Recall that uC = Pm i=1 ui. When p(w) = 1/|V | for all w ∈V , the negative cross-entropy simplifies to Q(uc) = Z X w exp uT Cvw  (uT c vw −log Zc) , 72 and its gradient ∇ucQ to Z X w exp(uCT vw)  vw − Pn ℓ=1 exp(uT c vℓ)vℓ Pn k=1 exp(uTc vk)  = Z X w exp(uCT vw)vw − X w exp(uT c vw)vw . Thus, ∇Q(uC) = 0 and since Q is concave, uC is its unique maximizer. Lemma 1. The Skip-Gram model satisfies assumption A2 when m ≤∆. Proof of Lemma 1. First, assume that m = ∆. In the Skip-Gram model target words are conditionally independent given a context word, i.e., p(c1, . . . , cm|w) = m Y i=1 p(ci|w). Applying Baye’s rule, p(w|c1, . . . , cm) = p(c1, . . . , cm|w)p(w) p(c1, . . . , cm) = p(w) p(c1, . . . , cm) m Y i=1 p(ci|w) = p(w) p(c1, . . . , cm) m Y i=1 p(w|ci)p(ci) p(w) = p(w)1−m ZC m Y i=1 p(w|ci) , (8) where ZC = 1/ (Qm i=1 p(ci)). This establishes the result when m = ∆. The cases m < ∆follow by marginalizing out ∆−m context words in the equality (8). Projection of paraphrases onto the vocabulary Theorem 2 states that if there is a word c in the vocabulary V whose context vector equals the sum of the context vectors of the words c1, . . . , cm, then c has the same “meaning”, in the sense of (4), as the composition of the words c1, . . . , cm. For any given set of words C = {c1, . . . , cm}, it is unlikely that there exists a word c ∈V whose context vector is exactly equal to the sum of the context vectors of the words c1, . . . , cm. 
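Since an exact match is unlikely, a practical way to explore the paraphrase definition is to evaluate Eq. (4) directly over the vocabulary. The sketch below is a brute-force illustration under assumptions A1 and A2, not the authors' procedure: it builds p(·|C) from the single-word softmax conditionals and a word prior (all toy placeholders) and returns the word minimizing the KL divergence; with a uniform prior this tends to agree with the additive composition of Theorem 2.

```python
# Brute-force paraphrase under A1 (Eq. 5) and A2 (Eq. 6), minimizing Eq. (4).
# U, V, and the prior are toy placeholders, not quantities from the paper.
import numpy as np

def p_given_c(U, V, c):
    s = V @ U[c]
    s -= s.max()                       # numerical stability
    e = np.exp(s)
    return e / e.sum()                 # assumption A1 (Eq. 5)

def p_given_set(U, V, prior, C):
    q = prior ** (1 - len(C))
    for c in C:
        q = q * p_given_c(U, V, c)
    return q / q.sum()                 # assumption A2 (Eq. 6)

def paraphrase_word(U, V, prior, C):
    target = p_given_set(U, V, prior, C)
    kls = [np.sum(target * np.log(target / p_given_c(U, V, c)))
           for c in range(U.shape[0])]  # KL divergence of Eq. (4) for every word
    return int(np.argmin(kls))

# Toy usage with a uniform prior (the setting of Theorem 2)
rng = np.random.default_rng(1)
n, dim = 20, 5
U, V = rng.normal(size=(n, dim)), rng.normal(size=(n, dim))
prior = np.full(n, 1.0 / n)
print(paraphrase_word(U, V, prior, C=[3, 7]))
```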
Similarly, in Theorem 1, the solution(s) to (7) will most likely not equal the context vector of any word in V . In both cases, we thus need to project the vector(s) onto words in our vocabulary in some manner. Since Theorem 1 holds for any prior over V , in theory, we could enumerate all words in V and find the word(s) that minimize the difference of the left hand side of (7) from the right hand side. In practice, it turns out that the angle between the context vector of a word w ∈V and solutionvector(s) is a good proxy and one gets very good experimental results by selecting as the paraphrase of a collection of words, the word that minimizes the angle to the paraphrase vector. Minimizing the angle has been empirically successful at capturing composition in multiple loglinear word models. One way to understand the success of this approach is to recall that each word c is characterized by a categorical distribution over all other words w, as stated in (1). The peaks of this categorical distribution are precisely the words with which c co-occurs most often. These words characterize c more than all the other words in the vocabulary, so it is reasonable to expect that a word c′ whose categorical distribution has similar peaks as the categorical distribution of c is similar in meaning to c. Note that the location of the peaks of p(·|c) are immune to the scaling of uc (athough the values of p(·|c) may change); thus, the words w which best characterize c are those for which vw has a high inner product with uc/∥uc∥2. Since uT c vw ∥uc∥2 −uT c′vw ∥uc′∥2 ≤ s 2  1 − uTc uc′ ∥uc∥2∥uc′∥2  ∥vw∥2, it is clear that if the angle between the context representations of c and c′ is small, the distributions p(w|c) and p(w|c′) will tend to have similar peaks. 3 Skip-Gram learns a Sufficient Dimensionality Reduction Model The Skip-Gram model assumes that the distribution of the neighbors of a word follows a specific exponential parametrization of a categorical distribution. There is empirical evidence that this model generates features that are useful for NLP tasks, but there is no a priori guarantee that the training corpus was generated in this manner. In this section, we provide theoretical support for the usefulness of the features learned even when the SkipGram model is misspecified. To do so, we draw a connection between SkipGram and the Sufficient Dimensionality Reduction (SDR) factorization of Globerson and Tishby (Globerson and Tishby, 2003). The SDR model 73 learns optimal2 embeddings for discrete random variables X and Y without assuming any parametric form on the distributions of X and Y , and it is useful in a variety of applications, including information retrieval, document classification, and association analysis (Globerson and Tishby, 2003). As it turns out, these embeddings, like Skip-Gram, are obtained by learning the parameters of an exponentially parameterized distribution. In Theorem 3 below, we show that if a SkipGram model is fit to the cooccurence statistics of X and Y , then the output can be trivially modified (by adding readily-available information on word frequencies) to obtain the parameters of an SDR model. This connection is significant for two reasons: first, the original algorithm of (Globerson and Tishby, 2003) for learning SDR embeddings is expensive, as it involves information projections. Theorem 3 shows that if one can efficiently fit a Skip-Gram model, then one can efficiently fit an SDR model. 
This implies that Skip-Gram specific approximation heuristics like negativesampling, hierarchical softmax, and Glove, which are believed to return high-quality approximations to Skip-Gram parameters (Mikolov et al., 2013b; Pennington et al., 2014), can be used to efficiently approximate SDR model parameters. Second, (Globerson and Tishby, 2003) argues for the optimality of the SDR embedding in any domain where the training information on X and Y consists of their coocurrence statistics; this optimality and the Skip-Gram/SDR connection argues for the use of Skip-Gram approximations in such domains, and supports the positive experimental results that have been observed in applications in network science (Grover and Leskovec, 2016), proteinomics (Asgari and Mofrad, 2015), and other fields. As stated above, the SDR factorization solves the problem of finding information-theoretically optimal features, given co-occurrence statistics for a pair of discrete random variables X and Y . Associate a vector wi to the ith state of X, a vector hj to the jth state of Y , and let W = [wT 1 · · · wT |X|]T and H be defined similarly. Globerson and Tishby show that such optimal features can be obtained from a low-rank factoriza2Optimal in an information-theoretic sense: they preserve the maximal mutual information between any pair of random variables with the observed coocurrence statistics, without regard to the underlying joint distribution. tion of the matrix G of co-occurence measurements: Gij counts the number of times state i of X has been observed to co-occur with state j of Y. The loss of this factorization is measured using the KL-divergence, and so the optimal features are obtained from solving the problem arg min W,H DKL  G ZG 1 ZW,H eWHT  . Here, ZG = P ij Gij normalizes G into an estimate of the joint pmf of X and Y , and similarly ZW,H is the constant that normalizes eWHT into a joint pmf. The expression eWHT denotes entrywise exponentiation of WHT . Now we revisit the Skip-Gram training objective, and show that it differs from the SDR objective only slightly. Whereas the SDR objective measures the distance between the pmfs given by (normalized versions of) G and eWHT , the SkipGram objective measures the distance between the pmfs given by (normalized versions of) the rows of G and eWHT . That is, SDR emphasizes fitting the entire pmfs, while Skip-Gram emphasizes fitting conditional distributions. Before presenting our main result, we state and prove the following lemma, which is of independent interest and is used in the proof of our main theorem. Recall that Skip-Gram represents each word c as a multinomial distribution over all other words w, and it learns the parameters for these distributions by a maximum likelihood estimation. It is known that learning model parameters by maximum likelihood estimation is equivalent to minimizing the KL-divergence of the learned model from the empirical distribution; the following lemma establishes the KL-divergence that Skip-Gram minimizes. Lemma 2. Let G be the word co-occurrence matrix constructed from the corpus on which a SkipGram model is trained, in which case Gcw is the number of times word w occurs as a neighboring word of c in the corpus. For each word c, let gc denote the empirical frequency of the word in the corpus, so that gc = X w Gcw/ X t,w Gt,w. Given a positive vector x, let ˆx = x/∥x∥1. 
Then, the Skip-Gram model parameters U =  u1 · · · u|V | T and V =  v1 · · · u|V | T 74 minimize the objective X c gcDKL( ˆgc ∥\ euTc VT ), where gc is the cth row of G. Proof. Recall that Skip-Gram chooses U and V to maximize Q = 1 T T X i=1 C X δ=−C δ̸=0 log p(wi+δ|wi) , where p(w|c) = euT c vw Pn i=1 euTc vi . This objective can be rewritten using the pairwise cooccurence statistics as Q= 1 T X c,w Gcw log p(w|c) = 1 T X c " X t Gct ! X w Gcw P t Gct log p(w|c) # ∝1 T X c " (P t Gct) (P tw Gtw) X w Gcw P t Gct log p(w|c) # = X c gc X w ˆgc w log p(w|c) ! = X c gc −DKL( ˆgc ∥p(·|c)) −H( ˆgc)  , where H(·) denotes the entropy of a distribution. It follows that since Skip-Gram maximizes Q, it minimizes X c gcDKL( ˆgc ∥p(·|c))= X c gcDKL( ˆgc ∥\ euTc VT ). We now prove our main theorem of this section, which states that SDR parameters can be obtained by augmenting the Skip-Gram embeddings to account for word frequencies. Theorem 3. Let U, V be the results of fitting a Skip-Gram model to G, and consider the augmented matrices ˜U = [U | α] and ˜V = [V | 1], where αc = log  gc P w euTc vw  and gc = P w Gc,w P t,w Gt,w . Then, the features ( ˜U, ˜V) constitute a sufficient dimensionality reduction of G. Proof. For convenience, let G denote the joint pdf matrix G/ZG, and let bG denote the matrix obtained by normalizing each row of G to be a probability distribution. Then, it suffices to show that DKL(G ∥qW,H) is minimized over the set of probability distributions  qW,H qW,H(w, c) = 1 Z  eWHT  cw  , when W = ˜U and H = ˜V. To establish this result, we use a chain rule for the KL-divergence. Recall that if we denote the expected KL-divergence between two marginal pmfs by DKL(p(·|c)∥q(·|c)) = X c p(c) X w p(w|c) log p(w|c) q(w|c) ! , then the KL-divergence satisfies the chain rule: DKL(p(w, c)∥q(w, c)) = DKL(p(c)∥q(c)) + DKL(p(w|c)∥q(w|c)). Using this chain rule, we get DKL(G ∥qW,H(w, c)) (9) =DKL(g ∥qW,H(c))+DKL( bG∥qW,H(w|c)). Note that the second term in this sum is, in the notation of Lemma 2, DKL( bG∥qW,H(w|c)) = X c gcDKL( ˆgc ∥\ ewTc HT ), so the matrices U and V that are returned by fitting the Skip-Gram model minimize the second term in this sum. We now show that the augmented matrices W = ˜U and H = ˜V also minimize this second term, and in addition they make the first term vanish. To see that the first of these claims holds, i.e., that the augmented matrices make the second term in (9) vanish, note that q ˜U, ˜V(w|c) ∝e˜uT c ˜vw = euT c vw+αc ∝qU,V(w|c), and the constant of proportionality is independent of w. It follows that q ˜U, ˜V(w|c) = qU,V(w|c) and DKL( bG ∥q ˜U, ˜V(w|c)) = DKL( bG ∥qU,V(w|c)). Thus, the choice W = ˜U and H = ˜V minimizes the second term in (9). 75 To see that the augmented matrices make the first term in (9) vanish, observe that when W = ˜U and H = ˜V, we have that q ˜U, ˜V(c) = g by construction. This can be verified by calculation: q ˜U, ˜V(c) = P w q ˜U, ˜V(w, c) P w,t q ˜U, ˜V(w, t) = P w euT c vw+αc P w,t euT t vw+αt = P w euT c vw  eαc P t P w euT t vw  eαt = h (eUVT 1) ⊙eαi c 1T h (eUVT 1) ⊙eα i. Here, the notation x ⊙y denotes entry-wise multiplication of vectors. Since αc = log(gc) −log  eUVT 1  c  , we have q ˜U, ˜V(c) = h (eUVT 1) ⊙eαi c 1T h (eUVT 1) ⊙eα i = gc P t gt = gc. The choice W = ˜U and H = ˜V makes the first term in (9) vanish, and it also minimizes the second term in (9). Thus, it follows that the features ( ˜U, ˜V) constitute a sufficient dimensionality reduction of G. 
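The augmentation in Theorem 3 is mechanical once U, V, and the co-occurrence counts are in hand. The following sketch uses toy inputs rather than a fitted model: it appends the extra column α to U and a column of ones to V, then checks numerically that the induced marginal over context words recovers the empirical frequencies g, as in the proof above.

```python
# Sketch of the Theorem 3 augmentation: [U | alpha] and [V | 1] from toy inputs.
import numpy as np

def sdr_augment(U, V, G):
    g = G.sum(axis=1) / G.sum()                         # empirical frequencies g_c
    partition = np.exp(U @ V.T).sum(axis=1)             # sum_w exp(u_c^T v_w) for each c
    alpha = np.log(g / partition)                       # alpha_c as defined in Theorem 3
    U_tilde = np.hstack([U, alpha[:, None]])            # [U | alpha]
    V_tilde = np.hstack([V, np.ones((V.shape[0], 1))])  # [V | 1]
    return U_tilde, V_tilde

# Sanity check: the marginal q(c) proportional to sum_w exp(u~_c^T v~_w) equals g.
rng = np.random.default_rng(2)
n, dim = 8, 3
U, V = rng.normal(size=(n, dim)), rng.normal(size=(n, dim))
G = rng.integers(1, 20, size=(n, n)).astype(float)      # toy co-occurrence counts
U_t, V_t = sdr_augment(U, V, G)
q = np.exp(U_t @ V_t.T).sum(axis=1)
q /= q.sum()
print(np.allclose(q, G.sum(axis=1) / G.sum()))          # True
```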
References Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics 4:385–399. Ehsaneddin Asgari and Mohammad R.K. Mofrad. 2015. Continuous distributed representation of biological sequences for deep proteomics and genomics. PloS One 10(11). Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. Journal Of Machine Learning Research 3:1137–1155. Kenneth Ward Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics 16(1):22–29. J.R. Firth. 1957. A synopsis of linguistic theory 19301955. Studies in Linguistic Analysis pages 1–32. Amir Globerson and Naftali Tishby. 2003. Sufficient Dimensionality Reduction. Journal of Machine Learning Research 3:1307–1331. Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 855–864. Omer Levy and Yoav Goldberg. 2014. Linguistic Regularities in Sparse and Explicit Word Representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. pages 171–180. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient Estimation of Word Representations in Vector Space. In International Conference on Learning Representations. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems. pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings. pages 746–751. Andriy Mnih and Geoffrey Hinton. 2007. Three New Graphical Models for Statistical Language Modelling. In Proceedings of the 24th International Conference on Machine Learning. ACM, pages 641–648. Denis Paperno and Marco Baroni. 2016. When the Whole is Less than the Sum of Its Parts: How Composition Affects PMI Values in Distributional Semantic Vectors. Computational Linguistics 42:345– 350. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. Steven T. Piantadosi. 2014. Zipf’s word frequency law in natural language: A critical review and future directions. Psychonomic Bulletin & Review 21(5):1112–1130. Douglas L. T. Rohde, Laura M. Gonnerman, and David C. Plaut. 2006. An improved model of semantic similarity based on lexical co-occurence. Communications of the ACM 8:627–633. 76